Volume 34, Issue 2, pp 301–312

Dreyfus on the “Fringe”: information processing, intelligent activity, and the future of thinking machines

  • Jeffrey White
Original Article


Abstract

From his preliminary analysis in 1965, Hubert Dreyfus projected a future much different from those with which his contemporaries were practically concerned, tempering their optimism about realizing something like human intelligence through conventional methods. At that time, he advised that there was nothing to be done "directly" toward machines with human-like intelligence, and that practical research should instead aim at a symbiosis between human beings and computers, with computers doing what they do best: processing discrete symbols in formally structured problem domains. Five decades later, his emphasis on the difference between two essential modes of processing continues into the famous Dreyfus–McDowell debate: the unconscious yet purposeful mode fundamental to situated human cognition, and the "minded" sense of conscious processing that characterizes symbolic reasoning and seems to lend itself to explicit programming. The present memorial reviews Dreyfus's early projections, asking whether the fears that punctuate current popular commentary on AI are warranted, and, in light of these, whether he would deliver similar practical advice to researchers today.


Keywords: Hubert Dreyfus · Fringe consciousness · Future of AI · Artificial intelligence



Acknowledgements

The author wishes to thank Karamjit Gill for this journal and for his support, as well as for arranging the anonymous review that resulted in insightful directions for significant improvements. This work is dedicated to Hubert Dreyfus.



Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. OIST - Cognitive Neurorobotics, Tani group, Onna, Japan
