Abstract
In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that, for our species to survive the impending rise of superintelligent AIs, we need to ensure that they will be human-friendly. This discussion note offers a more natural but bleaker outlook: in the end, if these AIs do arise, they won’t be that friendly.
References
Goertzel B (2012) Should humanity build a global AI nanny to delay the singularity until it’s better understood? J Conscious Stud 19(1–2):96–111
Muehlhauser L, Bostrom N (2014) Why we need friendly AI. Think 13(36):41–47
Pfeifer R, Scheier C (1999) Understanding intelligence. MIT Press, Cambridge
Restall G, Russell G (2010) Barriers to consequence. In: Pigden C (ed) Hume on is and ought. Palgrave Macmillan, Basingstoke, pp 243–259
Curmudgeon Corner
Curmudgeon Corner is a short opinionated column on trends in technology, arts, science and society, commenting on issues of concern to the research community and wider society. Whilst the drive for super-human intelligence promotes potential benefits to wider society, it also raises deep concerns of existential risk, thereby highlighting the need for an ongoing conversation between technology and society. At the core of Curmudgeon concern is the question: What is it to be human in the age of the AI machine? – Editor.
Cite this article
Boyles, R.J.M., Joaquin, J.J. Why friendly AIs won’t be that friendly: a friendly reply to Muehlhauser and Bostrom. AI & Soc 35, 505–507 (2020). https://doi.org/10.1007/s00146-019-00903-0