
The Biointelligence Explosion

How Recursively Self-Improving Organic Robots will Modify their Own Source Code and Bootstrap Our Way to Full-Spectrum Superintelligence

Chapter in Singularity Hypotheses

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

This essay explores how recursively self-improving organic robots will modify their own genetic source code and bootstrap our way to full-spectrum superintelligence. Starting with individual genes, then clusters of genes, and eventually hundreds of genes and alternative splice variants, tomorrow’s biohackers will exploit “narrow” AI to debug human source code in a positive feedback loop of mutual enhancement. Genetically enriched humans can potentially abolish aging and disease, recalibrate the hedonic treadmill to enjoy gradients of lifelong bliss, and phase out the biology of suffering throughout the living world.

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us…. Soon we must look deep within ourselves and decide what we wish to become.

Edward O. Wilson

Consilience: The Unity of Knowledge (1999)

I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.

Freeman Dyson

New York Review of Books (July 19, 2007)



Author information

Correspondence to David Pearce.

Illah R. Nourbakhsh on Pearce’s “The Biointelligence Explosion”

The Optimism of Discontinuity

In The Biointelligence Explosion, David Pearce launches a new volley in the epic, pitched battle of today’s futurist legions. The question of this age is: machine or man? And neither machine nor man resembles the modern-day variety. According to the Singularity’s version of foreshadowed reality, our successors are nothing like a simulacrum of human intelligence; instead they vault beyond humanity along every dimension, achieving heights of intelligence, empathy, creativity, awareness and immortality that strain the very definitions of these words as they stand today. Whether these super-machines embody our unnatural, disruptive posthuman evolution, displacing and dismissing our organic children, or whether they melt our essences into their circuitry by harvesting our consciousnesses and qualia like so much wheat germ, the core ethic of the machine disciples is that the future will privilege digital machines over carbon-based, analog beings.

Pearce sets up an antihero to the artificial-superintelligence scenario, proposing that our wetware will shortly become so well understood, and so completely modifiable, that personal bio-hacking will collapse the very act of procreation into a dizzying tribute to the ego. Instead of producing children as our legacy, we will modify our own selves, leaving natural selection in the dust by changing our personal genetic makeup in the most intensely personal form of creative hacking imaginable. But just like the AI singularitarians, Pearce dreams of a future in which the new and its ancestor are unrecognizably different. Regular humans have depression, poor tolerance for drugs and, let’s face it, mediocre social, emotional and technical intelligence. Full-Spectrum Superintelligences will have perfect limbic mood control, infinite self-inflicted hijacking of chemical pathways, and so much intelligence as to achieve omniscience bordering on Godliness.

The Singularity proponents have a fundamentalist optimism born, as in all religions, of something that cannot be proven or disproven rationally: faith. In their case, they have undying faith in a future discontinuity, the likes of which the computational world has never seen. After all, as Pearce points out, today’s computers have not shown even a smattering of consciousness, and so the ancestry of the intelligent machine, a machine so fantastically powerful that it can eventually invent the superintelligent machine, is so far an utter no-show. But this is all right if we can believe that with Moore’s Law comes a new golden chalice: a point of no return, when the progress of Artificial Intelligence self-reinforces, finally, and takes off like an airplane breaking ground contact and suddenly shooting upward into the air: a discontinuity that solves all the unsolvable problems. No measurement of AI’s effectiveness before the discontinuity matters from within this world view; the future depends only on the shape of a curve, and eventually all the rules will change when we hit a sudden bend. That a technical sub-field can depend so fully, not on early markers of success, but on the promise of an unknown future disruption, speaks volumes about the discouraging state of Artificial Intelligence today. When the best recent marker of AI, IBM’s Watson, wins peculiarly by responding to a circuit-driven light in 8 ms, obviating the chances of humans who must look at a light and depend on neural pathways orders of magnitude slower, then the AI Singularity cannot yet find a machine prophet.

Pearce is also an optimist, presenting an alternative view that extrapolates from the mile marker of yet another discontinuity: when hacker-dom successfully turns its tools inward, open-sourcing and bio-hacking their own selves to create recursively improving bio-hackers that rapidly morph away from the merely human into transcendental Superintelligence. The discontinuity is entirely different from the AI Singularity, and yet it depends just as much on a computational mini-singularity. Computers would need to provide the simulation infrastructure to enable bio-hackers to visualize and test candidate self-modifications. Whole versions of human-YACC and human-VMWare would need to compile and run entire human architectures in dynamic, simulated worlds to see just what behaviour will ensue when Me is replaced by Me-2.0. This demands a level of modelling, analog simulation and systems processing that depends on just as much of a discontinuity as the entire voyage. “And then a miracle happens” becomes almost cliché when every technical obstacle to be surmounted is not a mountain, but a hyperplane of unknown dimensionality!

But then there is the hairy underbelly of open-source genetics, namely that of systems engineering and open-source programming in general. As systems become more complex, Quality Assurance (QA) becomes oxymoronic because tests fail to exhaustively explore the state-space of possibilities. The Toyota Prius brake failures were not caught by engineers whose very job is to be absolutely sure that brakes never, ever fail, because just the right resonant frequency, combined with a hybrid braking architecture, combined with just the right accelerometer architecture and firmware, can yield a one-in-a-million rarity a handful of times, literally. The logistical tail of complexity is a massive headache in the regime of QA, and this bodes poorly for open-sourced hacking of human systems, which dwarf the complexity of the Toyota Prius exponentially. IDEs for bio-hacking; debuggers that can isolate part of your brain so that you can debug a nasty problem without losing consciousness (Game Over!); version control systems and repositories so that, in a panic, you can return your genomic identity to its most recent stable state: all of these tools will be needed, and we will of course be financially enslaved to the corporations that provide these self-modification tools. Will a company, let’s call it HumanSoft, provide a hefty discount on its insertion-vector applications if you agree to do some advertising: your compiled genome always drinks Virgil’s Root Beer at parties, espousing its combination of Sweet Birch and Molasses? Will you upgrade to HumanSoft’s newest IDE because it introduces forked compiling, so that you can run two mini-me’s in one body and switch between them every 5 s by reprogramming the brain’s neural pathways?
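
The QA point above is quantitative at heart: exhaustive testing loses to combinatorial growth. A toy Python sketch, my own illustration rather than anything from the chapter (the switch counts and test rate are arbitrary assumptions), shows how quickly the configuration space outruns any test harness:

```python
# Toy illustration: why exhaustive QA becomes infeasible as systems grow.
# With n independent binary configuration switches there are 2**n distinct
# states; a rare interaction fault may live in only a handful of them.

def states(n_switches: int) -> int:
    """Number of distinct configurations of n independent binary switches."""
    return 2 ** n_switches

def years_to_test(n_switches: int, tests_per_second: float = 1e6) -> float:
    """Wall-clock years needed to enumerate every configuration."""
    seconds = states(n_switches) / tests_per_second
    return seconds / (60 * 60 * 24 * 365)

for n in (30, 60, 90):
    print(f"{n} switches: {states(n):.2e} states, ~{years_to_test(n):.2e} years")
```

At a million tests per second, 30 switches are testable in under an hour, 60 already demand tens of thousands of years, and a genome-scale system lies far beyond 90.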

Perhaps most disquieting is the law of unintended consequences, otherwise known as robotic compounding. In the 1980s, roboticists thought that they could build robots bottom-up, creating low-level behaviours, testing and locking them in, then adding higher-level behaviours until, eventually, human-level intelligence flowed seamlessly from the machine. The problem was that the second level induced errors in how level one functioned, and it took unanticipated debugging effort to get level one working with level two. By the time a roboticist reaches level four, the number of side effects overwhelms the original engineering effort completely, and funding dries up before success can be had. Once we begin bio-hacking, we are sure to discover side effects that the best simulators will fail to recognize unless they are equal in fidelity to the real world. After how many major revisions will we discover that all our hacking time is spent trying to undo unintended consequences rather than optimizing desired new features? This is not a story of discontinuity, unfortunately, but of the gradual build-up of messy, complicated baggage that gums up the works and eventually becomes a singular centre of attention.
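
The compounding described above can be caricatured in a few lines of Python. This is a deliberately crude model of my own, not Nourbakhsh’s: assume that integrating level k forces one unit of rework on every level beneath it, so total effort grows quadratically rather than linearly.

```python
# Crude model of "robotic compounding": integrating level k costs one unit
# for the level itself plus one unit of rework for each of the k-1 levels
# below it. Total effort is therefore the k-th triangular number.

def integration_effort(levels: int) -> int:
    """Total units of engineering effort to stack `levels` behaviour levels."""
    return sum(1 + (k - 1) for k in range(1, levels + 1))

# Effort totals for 1..4 levels: the triangular numbers 1, 3, 6, 10.
print([integration_effort(k) for k in range(1, 5)])
```

By level four, rework already outweighs the effort spent on new behaviours, which is the pattern the robotics anecdote describes.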

We may just discover that the Singularity, whether it gives rise to Full-Spectrum Superintelligence or to an Artificial Superintelligence, surfaces an entire stable of mediocre attempts long before something of real value is even conceivable. Just how many generations of mediocrity will we need to bridge, and at what cost, to reach the discontinuity that is an existential matter of faith?

There is one easy answer here, at once richly appropriate and absurd. Pearce proposes that emotional self-control has one of the most profound consequences for our humanity, for we can make ourselves permanently happy. Learn to control the limbic system fully, and bio-hackers can hack their way into enforced sensory happiness; indeed, even modalities of happiness that effervesce beyond anything our non-drug-induced dreams can requisition today. Best of all, we could program ourselves for maximal happiness even if Me-2.0 is mediocre and buggy. Of course, this level of control over human chemical pathways suggests a level of maturity that pharmaceutical companies dream about today, but if it is truly possible to obtain permanent and profound happiness all around, then of course we lose both the condition and the state of happiness. It becomes the drudgery that is a fact of life.

Finally, let us return to one significant commonality between the two hypotheses: they both demand that technology provide the ultimate modelling and simulation engine, which I call the Everything Engine. The Everything Engine is critical to AI because computers must reason, fully, about the future implications of all state sets and actions. The Everything Engine is also at the heart of any IDE you would wish to use when hacking your genome: you need to model and generate evidence that your proposed personal modification yields a better you rather than a buggier you. But today, the Everything Engine is unobtainium, and we know that incremental progress on computation speed will not produce it. We need a discontinuity in computational trends in order to arrive at the Everything Engine. Pearce is right when he states that the two meta-narratives of Singularity are not mutually exclusive. In fact, they are conjoined at the hip; for, if their faith in a future discontinuity proves false, then we might just need an infinity of years to reach either Nirvana. And if the discontinuity arrives soon, then as Pearce points out, we will all be too busy inventing the future or evading the future to predict the future.


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Pearce, D. (2012). The Biointelligence Explosion. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_11

  • DOI: https://doi.org/10.1007/978-3-642-32560-1_11
  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1

