AI & SOCIETY, Volume 27, Issue 4, pp 431–436

On the irrationality of mind-uploading: a reply to Neil Levy

Author

    • Nicholas Agar, Philosophy Programme, Victoria University of Wellington
Original Article

DOI: 10.1007/s00146-011-0333-7

Cite this article as:
Agar, N. AI & Soc (2012) 27: 431. doi:10.1007/s00146-011-0333-7

Abstract

In a paper in this journal, Neil Levy challenges Nicholas Agar’s argument for the irrationality of mind-uploading. Mind-uploading is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer. Its advocates suppose that mind-uploading transfers both human minds and identities from biological brains into computers. According to Agar’s original argument, mind-uploading is prudentially irrational. Success relies on the soundness of the program of Strong AI—the view that it may someday be possible to build a computer that is capable of thought. Strong AI may in fact be false, an eventuality with dire consequences for mind-uploading. Levy argues that Agar’s argument relies on mistakes about the probability of failed mind-uploading and underestimates what is to be gained from successfully mind-uploading. This paper clarifies Agar’s original claims about the likelihood of mind-uploading failure and offers further defense of a pessimistic evaluation of success.

Keywords

Mind-uploading, Strong AI, Pascal’s Wager

1 On the irrationality of mind-uploading: a reply to Neil Levy

In a paper in this journal, Neil Levy (Levy 2011) challenges my argument (Agar 2010: chapter 4) for the prudential irrationality of mind-uploading. Mind-uploading is a futuristic process that involves scanning brains and recording relevant information which is then transferred into a computer.1 According to its advocates, uploading transfers both human minds and identities from biological brains into computers. Uploaded humans will enjoy benefits of enhanced cognition unavailable to those who retain their biological brains. While mind-uploading is not currently possible, advances in computer technology could make it available soon.

I call my argument against the rationality of mind-uploading, Searle’s Wager. This name acknowledges two philosophical precedents. First, it recognizes John Searle (Searle 1980), the best-known critic of the program of Strong AI—the view that it may someday be possible to build a computer that is capable of thought. It also acknowledges Blaise Pascal (Pascal 1995) who, lacking proof of God’s existence, presented his (in)famous Wager Argument for the prudential rationality of belief even when in doubt about God’s ontological status.

Searle’s Wager imagines candidates for mind-uploading being asked to place a bet. The success of mind-uploading is contingent on the truth of Strong AI. If Strong AI is a correct view then the procedure may work. Uploaded humans can enjoy a variety of enhancements denied to biological humans. Conversely, if Strong AI is a false view, then no computer could ever serve as a receptacle for a human mind. Mind-uploading inevitably fails. I argue that even those convinced by the philosophical arguments for Strong AI and therefore of the possibility of mind-uploading should allow that there is a non-negligible chance that they are, in fact, mistaken. I combine the claim that there is a significant chance that mind-uploading will fail with the claim that comparatively little is gained if the process is successful and much is lost if it fails. Hence, my conclusion that mind-uploading is prudentially irrational.
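
To make the structure of the bet explicit, it can be sketched in standard expected-value terms. The notation below is mine and purely illustrative; it is not part of the original argument:

\[
EV(\text{upload}) = p \cdot V(\text{successful upload}) + (1 - p) \cdot V(\text{failed upload}), \qquad EV(\text{refuse}) = V(\text{continued biological life}),
\]

where \(p\) is the probability that Strong AI is true and uploading therefore can succeed. The Wager’s claim is that, on reasonable assignments, \(EV(\text{upload}) < EV(\text{refuse})\): the value added by success over continued biological life is modest, the value lost through failure is great, and \(1 - p\) is not negligible.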

Searle’s Wager assumes that those undergoing mind-uploading consent to the destruction of their original brains and bodies. This assumption seems warranted if the process is to be viewed as a means of transferring not only their minds but also their identities. Those who choose to make electronic copies of themselves must confront the consequences of the possibly divergent agendas of biological humans and machine minds that are free to become more powerful with possibly exponential improvements in computing technologies.2

Levy mounts a reductio ad absurdum of my central line of reasoning. He allows that there is a non-zero probability that Searle is right and mind-uploading is fatal. He makes the point that there is also a non-zero probability that many minor modifications to the genes or structure of the human brain will prove fatal. Indeed, there is a non-zero probability that just thinking hard will be fatal (further complicated by the fact that there is a non-zero probability that refraining from thinking hard will be fatal). To quote Levy, “oops.” It follows that the principle to which I appeal cannot be action-guiding. I will show how an appropriate interpretation of the Wager avoids this reductio. There is a probabilistic threshold that various risks of death must exceed in order to prompt rational avoidance. The risk of death from failed mind-uploading exceeds this threshold. Levy proposes also that I mischaracterize the payoffs of the Wager. Its advocates market mind-uploading as enabling a dazzling array of cognitive and other enhancements. I argue that we place greater value on the more modest enhancements that do not require the destruction of our brains and bodies and their replacement by machines. I respond to Levy’s argument that this manoeuvre illegitimately minimizes benefits from mind-uploading.

2 Probabilities really do matter

Invoking Pascal’s argument for the prudential rationality of belief in God in the name “Searle’s Wager” may seem questionable. Pascal’s Wager has few supporters.3 There is one significant respect in which Searle’s Wager fares better than Pascal’s. According to an oft-made objection, Pascal’s Wager illicitly restricts alternatives: it fails, to list just one possibility overlooked by Pascal, to account for self-effacing Gods who punish belief. The alternatives for Searle’s Wager are more clearly binary—one either survives mind-uploading or one does not.

There is another significant difference between the two Wagers. Pascal famously supposes that the rewards for correctly believing in God are infinite—one gets to spend an eternity in the best of all conceivable places. He takes this to imply that any non-zero positive probability of God existing justifies belief—the expected return for any non-zero probability of God’s existence is infinite. The potential rewards and losses from participation in Searle’s Wager are great, but finitely so. If Strong AI is a false view then uploading kills you. Your living, thinking body is replaced by a computer no more capable of thought than is a dead human body. This is a very bad thing for most of us, but it’s not as bad as missing out on an eternity in paradise.
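
The contrast can be put schematically; this is a standard rendering of the familiar point rather than notation from the original text:

\[
EV_{\text{Pascal}}(\text{believe}) = p \cdot \infty + (1 - p) \cdot c = \infty \quad \text{for any } p > 0,
\]

where \(c\) is whatever finite cost belief carries if God does not exist. In Searle’s Wager, by contrast, both the reward of successful uploading and the loss from failed uploading are finite, so the expected value of uploading remains sensitive to the size of \(p\) rather than being settled by its mere non-zeroness.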

My statement “if there is room for rational disagreement you should not treat the probability of Searle being correct as zero… This is all that the Wager requires” (Agar 2010: 71), cited by Levy, is misleading. I should have stipulated reasonable room for rational disagreement. Searle’s Wager requires more than a non-zero probability of death by failed mind-uploading. The finitude of the potential rewards and costs means that there is a probabilistic threshold that potential costs must exceed. Those who walk across pedestrian crossings can justify their actions by making the point that the risk of death is very small indeed and the potential gains from being on the other side of the road are sufficiently large. By analogous reasoning, mind-uploading might be worthwhile if the risks of death are sufficiently small and the potential gains are sufficiently great.
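
On this reading the Wager turns on a threshold condition. As a rough sketch, in my notation and with no particular numbers intended, refusing to upload is prudentially rational when the expected loss outweighs the expected gain:

\[
(1 - p) \cdot L > p \cdot G, \quad \text{equivalently} \quad 1 - p > \frac{G}{G + L},
\]

where \(G\) is the gain successful uploading offers over continued biological life, \(L\) is the loss suffered if uploading fails, and \(p\) is the probability that Strong AI is true. Crossing at a pedestrian crossing passes this test because the probability of death is tiny relative to the gain; the burden of the following sections is that mind-uploading does not.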

What reason do we have for thinking that the probability of death-by-uploading is sufficiently high to justify refusal? My first piece of evidence is that intelligent, relevantly informed people who have thought long and hard about the issue have arrived at the conclusion that computers not only do not think now, but could never think. Searle and like-minded philosophers and computer scientists are not like deniers of the Holocaust or Young Earth creationists whose confidence in their conclusions depends on wilfully ignoring relevant historical or geological evidence. I recommend that believers in Strong AI accept the following principle.

Principle of Epistemic Modesty: suppose others believe differently from you on a matter of fact. They appear to have access to the fact that is as good as yours and they seem not to be motivated by considerations independent of the fact. You should take seriously the possibility that you are wrong and your opponents are right.

Dubious moral or political commitments render Holocaust deniers and Young Earth creationists insensitive to relevant facts. Our responses to their arguments are therefore not properly viewed as part of the quest for truth but rather have the purpose of neutralizing untoward political and social consequences. There is no piece of evidence possessed by defenders of Strong AI that Searle and like-minded opponents are either ignorant of, or wilfully overlook. Truth-seekers should take their arguments seriously.

I suggest that “taking seriously the possibility that you are wrong” entails assigning a non-negligible probability to propositions that you reject turning out to be true. Assigning probabilities to the outcomes of philosophical disputes is not the way philosophy is typically done. One problem arises in the assignment of probabilities to philosophical propositions. The disputes over the truth or falsehood of Strong AI, physicalism, consequentialist theories of morality, and other philosophical issues do not have outcomes that can be inspected as do tosses of coins. It can be worth betting on the outcome of a coin toss because one can readily toss the coin and pronounce winners and losers. It is rare indeed that you might think of placing a bet on a philosophical proposition. You might express the intention to “bet on” Kantian morality but it is unclear under what circumstances you would get to collect on your bet or have to pay out on it. This does not mean that there are no circumstances in which it could be appropriate to bet. I argue that the invention of mind-uploading technologies gives believers in Strong AI the opportunity to test their commitment to their philosophical belief by placing a kind of bet.

Here is a thought experiment that seeks to make vivid the probabilistic nature of our commitment to our philosophical beliefs. Suppose an omniscient being were to grant you an opportunity to place a bet on some of your philosophical “certainties.” Any presented proposition that turned out to be true would earn you a small financial reward. Any of your “certainties” which, upon presentation, turned out to be false would earn you instant death. This seems like an excellent deal for any propositions of whose truth you are genuinely certain. I have to confess that I would withhold almost all of my philosophical “certainties.” I feel confident defending physicalism—the view that everything is physical—against other philosophers. There is no doubt that I believe it. But I do not place the probability of its truth at 1. I would be loath to lose my life on account of a scrap of immaterial stuff that somehow got stuck to a far distant region of space-time.

The Principle of Epistemic Modesty does not entail that we should be so deferential as to exchange our philosophical beliefs for their denials. This is more than “taking seriously the possibility that you are wrong” requires. Advocates of Strong AI can persist in thinking that they have good reasons for holding on to their views. If pressed, they might assign a high probability to the proposition “Strong AI is true.” But this assignment should leave room for assigning a higher probability to “Strong AI is false” than to the logically possible but vastly improbable proposition that the entire universe was created over a period of a few days some 6,000 years ago, for example.

There is some reason to think that the negations of even strongly held philosophical claims—claims that stand at some distance from empirical verification—may have quite high probabilities. It is a hallmark of philosophical claims that they cannot be straightforwardly empirically verified. The truth of a philosophical claim typically depends on the truth of other philosophical claims. The proposition “Strong AI is true” depends in some way on the truth of other propositions about the mind and about the nature of computers. One of these is physicalism. An implication of physicalism is that there is no non-physical aspect of the human mind. There is no strict incompatibility between non-physicalism and Strong AI. It is possible that the non-physical aspects of human thought will be replicated by an appropriately programmed computer. But if physicalism is false then there is greater scope for Strong AI to be false. If human minds have both physical and non-physical parts then there are more ways in which computers whose programming captures many observable aspects of human thought could nonetheless lack aspects of mind. Discovering the falsehood of physicalism should therefore be seen as reducing the likelihood of Strong AI being a true view.

When philosophers debate one another, they typically grant certain philosophical assumptions. For example, many participants in debates about AI are physicalists. Since physicalism is not at issue, it receives scant attention. But if we are being honest, the probability of physicalism’s being true is not 1. Nor can shared assumptions about the future development of computers be viewed as having a probability of 1. If there is some as yet undiscovered physical law that limits the improvement of computers, they may never become sufficiently powerful to house a human mind. The compound probability of many independently probable events may itself be low. Someone who drives her car under the influence of alcohol just once may count herself unlucky if she is pulled over by the cops. A lifetime of drink-driving dramatically reduces her chances of evading officers of the law.
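
A small arithmetic illustration of this compounding; the numbers are purely illustrative and not drawn from the original text. If the truth of Strong AI depended on, say, five independent background assumptions, each assigned a probability of 0.9, the probability of the conjunction would be

\[
0.9^{5} \approx 0.59.
\]

A view resting on several individually plausible assumptions can therefore be appreciably less probable than any one of them taken alone.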

The preceding paragraphs do not have the purpose of persuading philosophical advocates of Strong AI to change their view. They certainly do not constitute an argument against Strong AI. Rather their purpose is to show that the probability of Strong AI being true is somewhat less than 1. Philosophers should not be so deferential as to exchange their current, apparently reasonable commitment to the truth of Strong AI for the reverse view. They can continue to loudly declare Strong AI to be the truth. If the probability of Strong AI being true is less than 1 but greater than the 0.00000001 we might assign to Young Earth Creationism, then what probability should it be assigned? I propose that it is likely to be high enough, once we quantify the potential losses and gains from mind-uploading, to motivate us not to accept the offer to upload.

It is possible that future enhancements in intelligence will make philosophical quandaries about the mind and computers more tractable. Perhaps our current level of intelligence does not suffice to decisively resolve the question of whether or not computers could ever think. Could cognitive enhancements solve this problem? I think that there is some inductive support for the idea that many of the mysteries about thought and consciousness will survive increases in intelligence. Ancient Greek philosophers were pondering questions about conscious experience over two millennia ago. Twenty-first century philosophers may not be any more intelligent than their Greek counterparts, but they do have access to tools for inspecting the physical bases of thought that are vastly more powerful than those available to Plato. In spite of this, philosophers do not find ancient Greek responses to questions about thought and consciousness the mere historical curiosities that modern scientists find ancient Greek physics and biology. Many of the conundrums of mind seem connected to its essentially subjective nature. There is something about the way our thoughts and experiences appear to us that seems difficult to reconcile with what science tells us about them. It does not matter whether the science in question is Aristotle’s or modern neuroscience.

It’s worth considering the alternative hypotheses in Levy’s reductio argument in light of this discussion. For example, Levy mentions conjectures that cognitive enhancements could interfere with consciousness. There certainly is a non-zero probability that any manner of “brain stimulation” could result in death. But such conjectures are properly placed together with Young Earth creationism and not with Searle’s philosophical investigations of AI. I doubt that they fall within the scope of the Principle of Epistemic Modesty.

3 How bad is death and how good is successful mind-uploading?

The probability that uploading results in death is lower than 1 but certainly higher than 0. Which way one should bet depends not only on the probabilities of the alternative outcomes. It also depends on the value we assign to them. Mind-uploading could still be a good bet if potential gains are sufficiently great and potential losses are sufficiently small.

I advance two claims about the values of failed and successful mind-uploading.

(1) Death is likely to be a significantly worse outcome for the members of societies that possess mind-uploading technologies than it is for us.

(2) The members of societies that possess mind-uploading technologies will place greater value on enhancements compatible with the survival of their biological brains and bodies than they will on enhancements enabled by uploading.

3.1 The costs of failed mind-uploading

Levy argues that I mischaracterize what is lost through failed mind-uploading. I call this death. But it is possible that the failure to transmit mind from brain to machine does not result in death. Levy points to theories of personal identity that would permit a form of mindless survival. Among these is “a suitably modified organism view”.4 The organism view would require substantial revision in order to countenance survival by failed mind-uploading. It holds that we are essentially biological organisms—entities which are deliberately destroyed in the form of mind-uploading under consideration. Nevertheless, we should agree with Levy that failed mind-uploading is not necessarily fatal. It might result in a form of mindless survival. The view that we are essentially human organisms countenances this possibility. It counts brains, like hearts, as very important parts of human organisms. It is possible, though unlikely, for a human organism to survive the loss of a brain just as it is possible, though unlikely, for a human organism to survive the loss of a heart. A further possibility is that mind-uploading might preserve our capacity for intentional states while destroying our capacity for consciousness. These possibilities may entail the preservation of our identities. But they are still sufficiently bad potential outcomes to motivate caution. Even if inessential to one’s identity, conscious mental states are preconditions for many things that we value.

It is not hard to think of people who would be undeterred by the risk of death (or an equivalent loss) from failed mind-uploading. A person about to expire from cancer can choose between certain death from disease and a merely possible death by uploading. Someone diagnosed with an incurable neurodegenerative disease might be unfazed by the risk of mindless or unconscious survival.

We should not mistake our present circumstances for those of people presented with the option of uploading. Candidates for uploading are unlikely to find themselves stricken with terminal cancer or Alzheimer’s and prepared to give the procedure a go. If the gerontologist Aubrey de Grey is right about the near future of our species, we could soon become ageless, immunized against cancer, heart failure, or any of the other diseases that might incline us to disregard caution about uploading.5 He claims that there is a good chance that people alive today will achieve millennial life spans. They will do this by systematically fixing up their brains and bodies, i.e. without recourse to uploading.

My argument here does not depend on the veracity of this and others of de Grey’s claims. Rather, it relies on the achievability of his plan relative to that of uploading. De Grey’s immediate goal is something he calls Longevity Escape Velocity (LEV). LEV does not require full and final fixes for heart disease, Alzheimer’s, cancer, and the other conditions that currently shorten human lives. What is essential is that we make appreciable and consistent progress against them. LEV will arrive when new therapies reliably add more years onto our lives than the time it takes to research them. According to de Grey, anyone who is alive at this time and has access to the full range of therapies should expect a millennial life span. New therapies will keep on coming, granting additional years faster than living consumes them. My point here requires only that LEV is likely to arrive sooner than uploading. Uploading requires not only a completed neuroscience, total understanding of what is currently the least well-understood part of the human body, but also perfect knowledge of how to convert every relevant aspect of the brain’s functioning into electronic computation. It is therefore likely to be harder to achieve than LEV.

This guess about future technological development could be wrong. Suppose we perfect mind-uploading well in advance of achieving longevity escape velocity. Then, it is possible that candidates for uploading may be facing slow deaths from untreatable diseases. They could be diagnosed with terminal cancer and properly view themselves as having nothing or very little to lose from uploading.6 In these circumstances, uploading could be prudentially rational for some people. For those who are not terminally ill, it will still make sense to direct their hopes and expectations toward the relatively risk-free methods of life extension and quality of life improvement.

3.2 The gains from successful mind-uploading

The citizens of societies that possess mind-uploading technologies are likely to take the risk of death or the loss of their conscious minds from failed mind-uploading more seriously than we do. But they might find such risks acceptable if counterbalanced by very considerable benefits from successful mind-uploading.

According to Kurzweil, trading neurons and synapses for more powerful media of thought will lead to dramatic enhancements. He predicts that very soon “the non-biological portion of our intelligence will be trillions of trillions of times more powerful than unaided human intelligence” (Kurzweil 2005: 9). Once freed from biology our intellects will enlarge at an exponential rate. This truly dramatic increase will be enabled by our learning how to exploit the computational potential of matter and energy (Kurzweil 2005: 29). Our minds will cannibalize ever-increasing quantities of the previously inanimate universe, reconfiguring it to enhance our powers of thought. In Kurzweil’s words, “[u]ltimately, the entire universe will become saturated with our intelligence…. We will determine our own fate rather than having it determined by the current ‘dumb’ simple, machinelike forces that rule celestial mechanics” (Kurzweil 2005: 29). On his account, this is all happening surprisingly soon. He nominates 2045 as the year of the Singularity, an event we will celebrate by creating minds “about one billion times more powerful than all human intelligence today” (Kurzweil 2005: 136).

Questions of personal identity arise in connection with such dramatic cognitive enhancements. It is reasonable to speculate about whether any human being could survive a process such as the one described by Kurzweil. Suppose, however, that one does survive. There is one sense in which these are enhancements of our cognitive powers. They dramatically extend our memories and increase the calculations we are able to perform per second. There is another sense in which their status as enhancements is questionable. Do they have an enabling effect on our current values? Are they likely to set back or to promote the things we most desire?7

These doubts are less significant in respect of more moderate enhancements, those that bring none of the risks of mind-uploading. They can enlarge our intellects in line with our values. They will enable us to improve our skills at playing winning chess endgames and to more easily learn foreign languages, difficult things that some of us actually desire to do.

Levy makes the point that “it can be rational to invest in future goods, even when we fail to desire those future goods at the time of the investment. It is sufficient that we believe that there is a high (enough) probability that we will come to desire those goods.” A young person may lack any desire to play bridge. He observes that many retired people enjoy bridge and believes that he will someday reach retirement age. He might therefore be motivated to at least consider learning the game. Although he lacks any current desire to play bridge, that desire is implied by a desire that he does currently have: the desire for a happy retirement. But there is a difference between making provision for a possible future desire that falls within the scope of a higher-order desire and making provision for a possible future desire that does not correspond with any desires that you currently have and that can be avoided.

I doubt that the outlandish possibilities enabled by machine minds as big as (and bigger than) solar systems are implied by our current desires. We can evade any responsibility for providing for these futures by refusing to mind-upload and ensuring that any enhancements promote our values rather than setting them back. We would be like the person who evades any responsibility to provide for a heroin-addicted future by not getting addicted to the drug in the first place.

Perhaps my speculations about the values of humans confronted with the prospect of mind-uploading are mistaken. Levy accuses me of oscillating “between two different perspectives: the perspective of those who face the choice of uploading sometime in the future (at t1), and our current perspective (at t).” Many humans in the early years of the twenty-first century will feel somewhat alienated by the quantum of enhancement enabled by mind-uploading. It is possible, however, that our future selves or descendants who actually possess mind-uploading technology will feel differently. We or they may feel immense frustration at being unable to do some of the things possible for intellects “about one billion times more powerful than all human intelligence today.” I choose to treat our present values as informative about the values of our modestly enhanced future selves. Supposing that they will change in ways that are more accepting of mind-uploading seems analogous to opposing present-day environmentalists on the grounds that the values of people of the future may be different from our own—they may actually prefer ravaged natural ecosystems! The difference between the t and t1 perspectives is likely to be small compared with the difference between either of these perspectives and that of a being who has mind-uploaded, and as a consequence, has an electronic (or a post-electronic) brain as big as a solar system. We at t view this transformation as inconsistent with our values. I predict that our moderately enhanced future selves and descendants at t1 will feel similarly.

4 Conclusion

I conclude that those presented with the option of mind-uploading are unlikely to deem it prudentially rational to do so. They should recognize that there is a significant chance that the procedure will be fatal. Furthermore, they have much to lose should mind-uploading fail and little to gain should it succeed.

Footnotes

1. For recent advocacy of mind-uploading, see Kurzweil (2005) and Sandberg and Bostrom (2008).

2. Kurzweil (2005) argues that technological improvement is exponential.

3. See Hajek (2008) for a very useful account of Pascal’s Wager.

4. The view that we are essentially human organisms is defended in Olson (1997).

5. For extensive presentation of this view, see de Grey and Rae (2007).

6. Thanks to Mark Walker for making this point.

7. These possibilities are explored in chapter 15 of Nussbaum (1992).

Acknowledgments

I am grateful to Neil Levy and Mark Walker for very helpful discussion of this paper.

Copyright information

© Springer-Verlag London Limited 2011