Echoes of myth and magic in the language of Artificial Intelligence

  • Open Forum
  • AI & SOCIETY

“We have lived so long with the conviction that robots are possible, even just around the corner, that we can’t help hastening their arrival with magic incantations.”

Drew McDermott, 1981, p. 145.

Abstract

To a greater extent than in other technical domains, research and progress in Artificial Intelligence have always been entwined with the fictional. Its language resonates strongly with other forms of cultural narrative, such as fairytales, myth and religion. In this essay we present varied examples that illustrate how these analogies have not only guided readings of the AI enterprise by commentators outside the community but also inspired AI researchers themselves. Owing to their influence, we pay particular attention to the similarities between religious language and the way in which the potential advent of greater than human intelligence is presented today. We then move on to the role that fiction, science fiction above all, has historically played and still plays in the discussion of AI by influencing researchers and the public and shifting the weights of different scenarios in our collectively perceived probability space. We sum up by arguing that the lore surrounding AI research, ancient and modern, points to the ancestral and shared human motivations that drive researchers in their pursuit and fascinate humanity at large. These points of narrative entanglement, where AI meets the wider culture, should amplify the call to engage with the discussion of this technology’s potential destination.

Notes

  1. For a highly poetic rendering of our all too human tendency to liken the mind to anything but itself, including mirrors, consider the following passage by George Eliot, that crown jewel of psychological belles lettres: “It is astonishing what a different result one gets by changing the metaphor! Once call the brain an intellectual stomach, and one’s ingenious conception of the classics and geometry as ploughs and harrows seems to settle nothing. But then, it is open to someone else to follow great authorities and call the mind a sheet of white paper or a mirror, in which case one’s knowledge of the digestive process becomes quite irrelevant. It was doubtless an ingenious idea to call the camel the ship of the desert, but it would hardly lead one far in training that useful beast. O Aristotle! if you had the advantage of being “the freshest modern” instead of the greatest ancient, would you not have mingled your praise of metaphorical speech as a sign of high intelligence, with a lamentation that intelligence so rarely shows itself in speech without metaphor,—that we can so seldom declare what a thing is, except by saying it is something else?” (Eliot 1997, p. 125). For an insightful in-depth treatment of the theoretical consequences of modeling the mind as a computer see Hurtado (2017).

  2. The poet T.S. Eliot, a friend of his youth, once described him (in a private letter) as “a great wonderful fat toad bloated with wisdom.” (Eliot 2011, p. 108).

  3. Butler’s closing remarks in the same piece (though it is hard to discern whether they be not at least partially tongue-in-cheek) radiate such passionate neo-luddite appeal that they might well have inspired Frank Herbert (1965), one of science-fiction’s most dearly cherished authors, in his masterpiece of geopolitical and philosophical intrigue, Dune, to give the name ‘Butlerian Jihad’ to a crusade that led to a galaxy-wide ban on thinking machines: “Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question. Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.” (Butler 1863, ¶ 7)

  4. Just as in Wiener’s case, in the following passage from William James we see how the single-mindedness of machines can coexist with their endowment with minds as a cause for concern: “A machine in working order functions fatally in one way. Our consciousness calls this the right way. Take out a valve, throw a wheel out of gear or bend a pivot, and it becomes a different machine, functioning just as fatally in another way which we call the wrong way. But the machine itself knows nothing of wrong or right: matter has no ideals to pursue. A locomotive will carry its train through an open drawbridge as cheerfully as to any other destination.” (James 1879, ¶ 37)

  5. Also in psychotherapy, as is well illustrated by the following example, dealing with personal styles among experienced practitioners and the difficulties facing disciples who seek to acquire the master’s way: A famed and reputedly brilliant clinical psychologist had successfully dealt with a chronically depressed patient by—during her most heightened crises—attentively listening to her and then, matter-of-factly but looking her straight in the eye, saying: “Well, then go ahead and kill yourself!”. These ritual words had always succeeded in putting the patient at ease and making her see things in a sobering perspective. The therapist was understandably aghast, then, when upon returning from a long vacation she came to learn that the student in training under whose care she had temporarily left the patient had been only too keen to echo her enchantment, and the patient, in turn, had this time obediently heeded the advice.

  6. In a symmetrical way, many qualitative researchers have, for similar reasons, adopted the techniques of their quantitative colleagues. See Musa et al. (2015).

  7. It bears mentioning that in a volume put forth by Edge Magazine, attempting to capture the thoughts of nearly two hundred scholars and thinkers on the topic of machines that think, Freeman Dyson offers the shortest response. After declaring his general skepticism that such machines will ever come to exist, he simply adds: “If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant.” (Dyson 2015, p. 47)

  8. Although there are some differences in flavour and shading between the terms ‘extropianism’ and ‘transhumanism’ (as well as within the use of the term ‘transhumanism’ itself on the part of different writers), for the purposes of this essay we will use them interchangeably.

  9. In addition to the socialist antecedent, Burkhead (1997, ¶ 8) offers another biblical forebear to this grand scheme: “The vision of a transhuman condition goes all the way back to Isaiah. Never again will there be in it [the new Jerusalem] an infant who lives but a few days, or an old man who does not live out his years; he who dies at a hundred will be thought a mere youth; he who fails to reach a hundred will be considered accursed.”

  10. It must be clarified, however, that despite the existence of certain foundational texts and certain prominent figures and institutions that act as attractors, there is no real unified organization that would encompass all those who identify as transhumanists. Speaking of AI makers, AI researchers and, for that matter, even transhumanists as though they were one single unified front in terms of belief and purpose is a misleading overgeneralization. A cursory perusal of the individual writings of key figures will show just how manifold their viewpoints are.

  11. Ever the masterful salesman, Kurzweil opens the article on his law with: “You will get $40 trillion just by reading this essay and understanding what it says” (2001, ¶ 2). Lest my own readers should abandon this paper and instantly flock there in pursuit of so tasty a reward, I must add, malgré moi, the spoiler that by the end of the piece he explains that: “The English word ‘you’ can be singular or plural. I meant it in the sense of ‘all of you’” (2001, ¶ 268).

  12. Transhumanism critic HP LaLancette (2007) takes this form of reasoning to its paroxysmic logical conclusion, pointing out that the very same argument can also be used to prove that the end goal of natural selection is the creation of the toilet brush. All that is needed is to replace the relevant landmarks. Thus, the Big Bang took place 13.7 billion years ago, after which another 10 billion years had to elapse for life on Earth to arise. The appearance of the digestive tract, however, took only a further 2.75 billion years, and the sphincter showed up a mere 575 million years after that. This projection leads us to the inescapable conclusion: eventually the whole universe will turn into one giant toilet brush.

  13. While not identical to theirs, this classification owes much of its clarity to Cave and Dihal’s recent typology of the “ways in which these narratives [of hope and fear] could shape [AI] technology and its impact.” (Cave and Dihal 2019, p. 74)

  14. Not to mention that AI researchers do not merely consume sci-fi but produce it as well. To single out but two prominent examples, both John McCarthy and Marvin Minsky, starring figures at the Dartmouth Conference on Artificial Intelligence, which many consider the official birthplace of the field (Kline 2011), have contributed their talents to the narrative arts. Minsky co-authored the technothriller The Turing Option (Harrison and Minsky 1992) and McCarthy (2014) penned the delightful short story The Robot and the Baby, which shows just how hard it is to prevent people from anthropomorphizing automata.

  15. In his Foreword to the Millennial Edition of 2001: A Space Odyssey, Arthur C. Clarke reproduces a touching letter sent to him by astronaut Joseph Allen, mission specialist on the Space Shuttle program: “Dear Arthur, When I was a boy, you infected me with both the writing bug and the space bug, but neglected to tell me how difficult either undertaking can be.” (Clarke 2000, p. xviii)

  16. Carnegie Mellon (academic home of Newell and Simon) is not just any university when it comes to the history of AI. Along with Minsky’s MIT, McCarthy’s Stanford and the Stanford Research Institute, it is one of the main four centers where AI took off. Seeking to characterize their differing styles, Pamela McCorduck offered this droll analogy between AI and the garment industry: “Consider MIT haute couture, the Women’s Wear Daily of the field. No sooner do hemlines go down with enormous fanfare than they go up again, the provinces growing dizzy with trying to keep pace and usually falling behind. MIT thinks itself stylish, but outsiders have been known to call it faddish. Carnegie Mellon, on the contrary, represents old-world craftsmanship, attending to detail and using the finest materials. These qualities presumably speak for themselves in gowns you can wear to a dinner party ten years from now and never fear the seams might part. But classic can be stodgy: if Queen Elizabeth of England bought artificial intelligence, she’d surely buy at Carnegie Mellon. Stanford has two ateliers. The first is the Levis’ jeans of AI: sturdy, durable, democratic; worn by socialites and welfare clients alike; and mentioned proudly by everyone in the trade whenever questions of practicality or utility come up. The other is Nudist World, incorporating After Six; this shop is visionary about the formal wear of the future, but meanwhile remains naked. Finally, Stanford Research Institute is Seventh Avenue. Maybe those models are knock-offs, but hardly anyone can afford haute couture, and except for the jeans people, who else is going to bring AI into the real world?” (McCorduck 1979, p. 112)

  17. Renowned, among other things, for being the namesake and coiner of Sturgeon’s Law, which states that while it’s true that 90% of science fiction is crap, that is only because 90% of everything is crap.

  18. Compare with Dryden’s (1913) rendering of Pygmalion’s enthrallment to his creation, as told by Ovid: “Pleas’d with his Idol, he commends, admires, / Adores; and last, the Thing ador’d, desires.”

  19. The three laws made their first formal appearance in Asimov’s (1942) short story Runaround. To this story, Marvin Minsky claims a deep debt: “After ‘Runaround’ appeared in the March 1942 issue of Astounding, I never stopped thinking about how minds might work. Surely we’d someday build robots that think. But how would they think and about what?” (Minsky cited in Markoff 1992, ¶ 18).

  20. Contrary to what the example suggests, the goal of some AI system need not be particularly stupid to be extremely dangerous. Stephen Omohundro has argued that even a chess-playing robot “will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.” (Omohundro 2008, p. 483)

  21. If you can’t beat them, join them, folksy wisdom asserts, and that is precisely what Yudkowsky did from 2010 to 2015 when he wrote his acclaimed spin on the Harry Potter franchise (Yudkowsky 2015). Hailed as one of the most successful fan fictions ever written (Whelan 2015), Harry Potter and the Methods of Rationality portrays Harry as a precocious genius who unleashes the whole arsenal of scientific reasoning upon the functioning of the magic world in order to maximize his own power (and optimize the world while he’s at it). In keeping with Geraci’s (2012, p. 40) claim that the incursions of the AI community into the realm of fiction crafting are more often than not evangelical in nature and are never written just for fun, HPMOR, as it is popularly known, is an attempt, much like Yudkowsky’s Center for Applied Rationality, to induct young talents into the practice of Bayesian thinking, which could set them on a path of preventing the emergence of hostile superintelligences.

  22. We mean ‘tacit’ in the sense of Polanyi 1983.

  23. A very vivid case in point is a recent flashy headline that made the rounds of social media, to the effect that Facebook had been forced to shut down some of its Artificial Intelligence agents since they had developed their own secret language and started communicating with each other to the befuddlement of their creators (Griffin 2017; Bradley 2017; Collins 2017). What actually happened, though, was that chatbots designed for interaction with humans in a negotiation setting drifted away from conventional English, and the researchers simply refined their reward schema to keep them on track with grammatical language (Lewis et al. 2017).

References

  • Asimov I (1942) Runaround. Astound Sci Fict 29:94–103

  • Barrat J (2013) Our final invention: artificial intelligence and the end of the human era. Thomas Dunne Books, Chicago

  • Barutta J, Cornejo C, Ibáñez A (2011) Theories and theorizers: a contextual approach to theories of cognition. Integr Psychol Behav Sci 45(2):223–246

  • Bates RA (2011) AI & SciFi: teaching writing, history, technology, literature, and ethics. In: Paper presented at 2011 ASEE annual conference & exposition, Vancouver, BC. https://peer.asee.org/17433. Accessed 25 Apr 2019

  • Bostrom N (2003) Ethical issues in advanced artificial intelligence. In: Smit I (ed) Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, vol 2, pp 12–17

  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford, United Kingdom

  • Bostrom N (2015) It’s still early days. In: Brockman J (ed) What to think about machines that think. HarperCollins, New York, pp 126–127

  • Bova B (1974) The role of science fiction. In: Bretnor R (ed) Science fiction today and tomorrow. Harper & Row, New York

  • Bradley T (2017) Facebook AI creates its own language in creepy preview of our potential future. Forbes. https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future. Accessed 25 Apr 2019

  • Brautigan R (1967) All watched over by machines of loving grace. The Communication Company, San Francisco

  • Burkhead L (1997) Extropianism in the memetic ecosystem. Extropians Message Board

  • Butler S (1863) Darwin among the machines. Christchurch Press, June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html. Accessed 25 Apr 2019

  • Cave S, Dihal K (2019) Hopes and fears for intelligent machines in fiction and reality. Nat Mach Intell 1(2):74

  • Cave S, Coughlan K, Dihal K (2019) ‘Scary robots’: examining public responses to AI. In: Proc. AIES. http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_200.pdf. Accessed 25 Apr 2019

  • Chalmers DJ (2010) The singularity: a philosophical analysis. J Conscious Stud 17:7–65. http://consc.net/papers/singularity.pdf. Accessed 25 Apr 2019

  • Clark SRL (1995) Tools, machines, and marvels. In: Fellows R (ed) Philosophy and technology. Cambridge University Press, Cambridge

  • Clarke AC (2000) 2001: a space odyssey. ROC, New York

  • Collins T (2017) Facebook shuts down controversial chatbot experiment after AIs develop their own language to talk to each other. Daily Mail. https://www.dailymail.co.uk/sciencetech/article-4747914/Facebook-shuts-chatbots-make-language.html. Accessed 25 Apr 2019

  • Computer History Museum (2017) Oral History of John McCarthy [Video file]. http://www.youtube.com/watch?v=KuU82i3hi8c. Accessed 25 Apr 2019

  • Comrada N (1995) Golem and robot: a search for connections. J Fantas Arts 7(2/3):244–254

  • Cornejo C, Musa R (2017) The physiognomic unity of sign, word, and gesture. Behav Brain Sci 40:E51. https://doi.org/10.1017/S0140525X15002861

  • Cramer JG (1990) Technology fiction (Part I). Foresight update 8, March 15. http://www.islandone.org/Foresight/Updates/Update08/Update08.2.html. Accessed 25 Apr 2019

  • DeBaets AM (2015) Rapture of the Geeks: singularitarianism, feminism, and the yearning for transcendence. In: Mercer C, Trothen TJ (eds) Religion and transhumanism. Praeger, CA, pp 181–197

  • Deutsch D (2011) The beginning of infinity. Allen Lane, London

  • Dreyfus HL (1992) What computers still can’t do: a critique of artificial reason. MIT, Cambridge, Mass

  • Dryden J (1913) The Poems of John Dryden, ed. by John Sargeaunt. Oxford University Press, London. https://www.bartleby.com/204/199.html. Accessed 25 Apr 2019

  • Dyson G (2005) Turing’s cathedral. Edge. http://www.edge.org/conversation/george_dyson-turings-cathedral. Accessed 25 Apr 2019

  • Dyson F (2015) I could be wrong. In: Brockman J (ed) What to think about machines that think. HarperCollins, New York, pp 126–127

  • Egan G (1997) Diaspora. Orion, London

  • Eliot G (1997) The mill on the floss. Wordsworth Editions, Hertfordshire

  • Eliot TS (2011) [Letter written December 31, 1914 to Conrad Aiken]. In: The letters of T.S. Eliot, vol 1. Faber and Faber, London

  • Fast E, Horvitz E (2017) Long-term trends in the public perception of artificial intelligence. In: Thirty-first AAAI conference on artificial intelligence. https://arxiv.org/abs/1609.04904 (2016)

  • Fellows R (1995) Welcome to Wales: Searle on the computational theory of mind. In: Fellows R (ed) Philosophy and technology. Cambridge University Press, Cambridge

  • Feynman R (1985) Surely you’re joking, Mr. Feynman! Adventures of a curious character. Bantam Books, New York

  • Foerst A (2004) God in the machine: what robots teach us about humanity and God. Dutton, New York

  • Geraci RM (2008) Apocalyptic AI: religion and the promise of artificial intelligence. J Am Acad Relig 76(1):138–166

  • Geraci RM (2012) Apocalyptic AI: visions of heaven in robotics, artificial intelligence, and virtual reality. Oxford University Press, Oxford

  • Goldsmith J, Mattei N (2014) Fiction as an introduction to computer science research. ACM TOCE 14(1):4

  • Good IJ (1966) Speculations concerning the first ultraintelligent machine. Adv Comput 6:31–88

  • Griffin A (2017) Facebook’s artificial intelligence robots shut down after they start talking to each other in their own language. The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html. Accessed 25 Apr 2019

  • Halpern M (2008) The Trojan Laptop. Vocabula review, vol 10, Issue 1. http://www.rules-of-the-game.com/com007-trojanlaptop.htm. Accessed 25 Apr 2019

  • Harrison H, Minsky M (1992) The Turing option. Warner, New York

  • Herbert F (1965) Dune. Chilton books, Philadelphia

  • Herbert F (1974) Science fiction and a world in crisis. In: Bretnor R (ed) Science fiction today and tomorrow. Harper & Row, New York

  • Hess DJ (1995) On low-tech cyborgs. In: Gray CH, Figueroa-Sarriera H, Mentor S (eds) The cyborg handbook. Routledge, New York, pp 371–378

  • Hurtado E (2017) Consequences of theoretically modeling the mind as a computer. Doctoral dissertation, Pontificia Universidad Católica de Chile. https://repositorio.uc.cl/handle/11534/21956

  • Ingold T (2007) Lines: a brief history. Routledge, Oxon

  • James W (1879) Are we automata? Mind 4:1–22. http://psychclassics.yorku.ca/James/automata.htm. Accessed 25 Apr 2019

  • Johnson G (1998) Science and religion: bridging the great divide. New York Times. http://www.nytimes.com/1998/06/30/science/essay-science-and-religion-bridging-the-great-divide.html. Accessed 25 Apr 2019

  • Kline R (2011) Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Ann Hist Comput 33(4):5–16

  • Kress G (2010) Multimodality. Routledge, London

  • Kurzweil R (1990) The age of intelligent machines. MIT Press, Cambridge

  • Kurzweil R (2001) The law of accelerating returns. KurzweilAI.net. https://www.kurzweilai.net/the-law-of-accelerating-returns. Accessed 25 Apr 2019

  • Kurzweil R (2005) The singularity is near: when humans transcend biology. Penguin, New York

  • LaGrandeur K (2003) Magical code and coded magic: the persistence of occult ideas in modern gaming and computing. In: Paper presented at the conference of the society for literature, science and the arts. http://ieet.org/index.php/IEET/more/lagrandeur20131026. Accessed 25 Apr 2019

  • LaLancette HP (2007) The law of accelerating toilet brushes. http://blog.infeasible.org/2007/05/29/the-law-of-accelerating-toilet-brushes.aspx. Accessed 27 Nov 2012

  • Lancaster BL (1997) The golem as a transpersonal image: a marker of cultural change. Transpers Psychol Rev 1(3):5–11

  • Lancaster BL (2007) La esencia de la Kábala: La enseñanza interior del Judaísmo. EDAF, Madrid

  • Latour B (1987) Science in action. Harvard University Press, Cambridge, MA

  • Lewis M, Yarats D, Dauphin YN, Parikh D, Batra D (2017) Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125

  • Lin P, Bekey G, Abney K (2008) Autonomous military robotics: risk, ethics, and design. California Polytechnic State University, San Luis Obispo

  • Markoff J (1992) Technology; a celebration of Isaac Asimov. The New York Times, April 12

  • McCarthy J (2014) The robot and the baby. In: Wilson DH, Adams JJ (eds) Robot uprisings. Simon & Schuster, London, pp 343–362. http://www-formal.stanford.edu/jmc/robotandbaby/robotandbaby.html. Accessed 25 Apr 2019

  • McCorduck P (1979) Machines who think. W. H. Freeman and Company, San Francisco

  • McCorduck P (2004) Foreword. In: Machines who think. A K Peters, Natick

  • McDermott D (1981) Artificial intelligence meets natural stupidity. In: Haugeland J (ed) Mind design. MIT, Cambridge, pp 143–160

  • Melzer A (2007) On the pedagogical motive for esoteric writing. J Polit 69(4):1015–1031

  • Minsky M (2007) Scientist on the set: an interview with Marvin Minsky/interviewer: David G. Stork [Transcript]. https://web.archive.org/web/20071113031417/http://mitpress.mit.edu/e-books/Hal/chap2/two3.html. Accessed 25 Apr 2019

  • Musa R (2019) Computer machinery and the benefit of the doubt. In: Myths, tests and games: cultural roots and current routes of artificial intelligence. Doctoral dissertation, Pontificia Universidad Católica de Chile. https://repositorio.uc.cl/handle/11534/28511

  • Musa R, Olivares H, Cornejo C (2015) Aesthetic aspects of the use of qualitative methods in psychological research. In: Marsico G, Ruggieri RA, Salvatore S (eds) Reflexivity and psychology. Information Age, Charlotte, pp 87–116

  • Newell A (1992) Fairy Tales. AI Mag 13(4):46–48

  • Noble DF (1999) The religion of technology: the divinity of man and the spirit of invention. Penguin, New York

  • Oatley K, Mar RA, Djikic M (in press) The psychology of fiction: present and future. In: Jaén EI, Simon J (eds) The cognition of literature. Yale University Press, New Haven

  • Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial general intelligence 2008: proceedings of the first AGI conference. Frontiers in artificial intelligence and applications 171. Amsterdam: IOS, pp 483–492

  • Orwell G (1946) Politics and the English language. Horizon 13(76):252–265

  • Pels P (2013) Amazing stories: how science fiction sacralizes the secular. In: Stolow J (ed) Deus in machina: religion, technology, and the things in between. Fordham University Press, New York

  • Polanyi M (1983) The tacit dimension. Peter Smith Publisher Inc, Gloucester

  • Rosas R (1992) ¿Comerán los androides el fruto prohibido? Reflexiones acerca del Test de Turing. Apuntes de Ingeniería 45(1992):111–129

  • Ross G (2007) An interview with Douglas R. Hofstadter. American Scientist

  • Sandberg A (2010) An overview of models of technological singularity. In: Roadmaps to AGI and the future of AGI Workshop, Lugano, Switzerland, March, vol 8

  • Scortia TN (1974) Science fiction as the imaginary experiment. In: Bretnor R (ed) Science fiction today and tomorrow. Harper & Row, New York

  • Shelley M (1818) Frankenstein; or, the modern Prometheus. M. K. Joseph, London

  • Sotala K (2007) The logical fallacy of generalization from fictional evidence [Blog comment]. https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional#gchLRgHocaajGkEy2. Accessed 25 Apr 2019

  • Sturgeon T (1974) Science fiction, morals, and religion. In: Bretnor R (ed) Science fiction today and tomorrow. Harper & Row, New York

  • Tambe M, Balsamo A, Bowring E (2008) Using science fiction in teaching artificial intelligence. In: AAAI Spring symposium, pp 86–91

  • Taube M (1961) Computers and common sense: the myth of thinking machines. Columbia University Press, New York

  • Thiel P, Masters B (2014) Zero to one: notes on startups, or how to build the future. Crown Business, New York

  • Valsiner J (2009) Between fiction and reality: transforming the semiotic object. Sign Syst Stud 37(1/2):99–113

  • Van Leeuwen T (2004) Introducing social semiotics: an introductory textbook. Routledge, London

  • Vico G (1948) The new science of Giambattista Vico. (Translated by Thomas Goddard Bergin & Max Harold Fisch). Cornell University Press, Ithaca

  • Voegelin E (1952) The new science of politics. University of Chicago Press, Chicago

  • Whelan D (2015) The Harry Potter fan fiction author who wants to make everyone a little more rational. VICE. https://www.vice.com/en_us/article/gq84xy/theres-something-weird-happening-in-the-world-of-harry-potter-168. Accessed 25 Apr 2019

  • Wiener N (1964) God & Golem Inc: a comment on certain points where cybernetics impinges on religion. MIT, Cambridge

  • Williams R (1994) The metamorphosis of prime intellect. http://localroger.com/prime-intellect/mopiidx.html. Accessed 25 Apr 2019

  • Yudkowsky E (2004) Coherent extrapolated volition. Machine Intelligence Research Institute, Berkeley. https://intelligence.org/files/CEV.pdf. Accessed 25 Apr 2019

  • Yudkowsky E (2007a) The logical fallacy of generalization from fictional evidence. [Blog post]. http://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional. Accessed 25 Apr 2019

  • Yudkowsky E (2007b) An alien god. [Blog post]. http://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god. Accessed 25 Apr 2019

  • Yudkowsky E (2008a) Artificial intelligence as a positive and negative factor in global risk. In: Bostrom N, Ćirković MM (eds) Global catastrophic risks. Oxford University Press, New York, pp 308–345

  • Yudkowsky E (2008b) Cognitive biases potentially affecting judgment of global risks. In: Bostrom N, Ćirković MM (eds) Global catastrophic risks. Oxford University Press, New York, pp 91–119

  • Yudkowsky E (2011) Complex value systems are required to realize valuable futures. In: Schmidhuber J, Thórisson KR, Looks M (eds) Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings, pp 388–393. https://intelligence.org/files/ComplexValues.pdf. Accessed 25 Apr 2019

  • Yudkowsky E (2015) Harry Potter and the methods of rationality. http://www.hpmor.com. Accessed 25 Apr 2019

Funding

This work is sponsored by grants from CONICYT and Pontificia Universidad Católica de Chile.

Author information

Corresponding author

Correspondence to Roberto Musa Giuliano.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Musa Giuliano, R. Echoes of myth and magic in the language of Artificial Intelligence. AI & Soc 35, 1009–1024 (2020). https://doi.org/10.1007/s00146-020-00966-4
