Volume 33, Issue 4, pp 565–572

Reconciliation between factions focused on near-term and long-term artificial intelligence

  • Seth D. Baum
Open Forum


Artificial intelligence (AI) experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons (as found in the traditional norms of computer science) and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: (1) advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; (2) technical research on how to make any AI more beneficial to society; and (3) policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other’s advocacy for attention to the societal impacts of AI. The reconciliation between the presentist and futurist factions can improve both near-term and long-term societal impacts of AI.


Artificial intelligence · Near-term artificial intelligence · Long-term artificial intelligence · Societal impacts of artificial intelligence · Artificial general intelligence · Artificial superintelligence



Copyright information

© Springer-Verlag London 2017

Authors and Affiliations

  1. Global Catastrophic Risk Institute, Washington, DC, USA
