Rational Universal Benevolence: Simpler, Safer, and Wiser Than “Friendly AI”

  • Mark Waser
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6830)


Abstract

Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) fits this definition on four separate counts, expecting a good result even though: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB), by contrast, is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?


Keywords: Artificial General Intelligence (AGI) · Safe AI · Friendly AI (FAI) · Coherent Extrapolated Volition (CEV) · Rational Universal Benevolence (RUB)





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Mark Waser
  1. Books International, Dulles, USA
