Minds and Machines, Volume 27, Issue 4, pp 575–590

Reframing AI Discourse

  • Deborah G. Johnson
  • Mario Verdicchio


A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now on display in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. Public discourse, as well as discourse among AI researchers, gives rise to at least two problems: a confusion about the notion of ‘autonomy’ that leads people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use a language that reframes the discourse on AI and sheds light on the real issues in the discipline.


Keywords: Artificial intelligence · Autonomy · Future · Robots · Sociotechnical systems



Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  1. University of Virginia, Charlottesville, USA
  2. Università degli Studi di Bergamo, Bergamo, Italy
