
AI Assistants and the Paradox of Internal Automaticity

  • William A. Bauer
  • Veljko Dubljević
Original Paper


What is the ethical impact of artificial intelligence (AI) assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of an ethical framework for assessing the impact of AI assistants on our lives, John Danaher has argued that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He exploits this paradox of internal automaticity to downplay the threats that external automaticity poses to our autonomy. In this paper, we challenge the legitimacy of the paradox. Whereas Danaher assumes that internal and external automaticity are roughly equivalent, we argue that there are good reasons to accept a large degree of internal automaticity: it is essential to our sense of autonomy and, as such, ethically good. The same does not hold for external automaticity. The similarity between the two is therefore not as strong as the paradox presumes. We conclude with practical recommendations for better managing the integration of AI assistants into society.


Keywords: AI assistants · AI ethics · Autonomy · Internal automaticity · External automaticity · Cognition



Acknowledgments

The authors would like to thank two anonymous reviewers for detailed, helpful comments, and audience members at a presentation at the North Carolina Philosophical Society meeting in Greensboro, NC (March 8, 2019), for helpful questions and comments. Special thanks to Sean Douglas and Matthew Ferguson for valuable research assistance.


References

  1. Reiner, P.B., and S.K. Nagel. 2017. Technologies of the extended mind: defining the issues. In Neuroethics: Anticipating the Future, ed. J. Illes, 108–122. New York: Oxford University Press.
  2. Fujita, A. 2012. GPS tracking disaster: Japanese tourists drive straight into the Pacific. ABC News. (Accessed 24 May 2019.)
  3. Etzioni, A., and O. Etzioni. 2016. AI assisted ethics. Ethics and Information Technology 18: 149–156.
  4. Clark, A., and D. Chalmers. 1998. The extended mind. Analysis 58 (1): 7–19.
  5. Buller, T. 2013. Neurotechnology, invasiveness and the extended mind. Neuroethics 6 (3): 593–605.
  6. Hernández-Orallo, J., and K. Vold. 2019. AI extenders: the ethical and societal implications of humans cognitively extended by AI. Association for the Advancement of Artificial Intelligence.
  7. Danaher, J. 2018. Toward an ethics of AI assistants: an initial framework. Philosophy & Technology 31: 629–653.
  8. Dubljević, V. 2013. Autonomy in neuroethics: political and not metaphysical. AJOB Neuroscience 4 (4): 44–51.
  9. Bell, E., V. Dubljević, and E. Racine. 2013. Nudging without ethical fudging: clarifying physician obligations to avoid ethical compromise. American Journal of Bioethics 13 (6): 18–19.
  10. Dubljević, V. 2016. Autonomy is political, pragmatic and post-metaphysical: a reply to open peer commentaries on ‘Autonomy in Neuroethics’. AJOB Neuroscience 7 (4): W1–W3.
  11. Carr, N.G. 2014. The glass cage: Automation and us. New York: W.W. Norton.
  12. Krakauer, D. 2016. Will A.I. harm us? Better to ask how we’ll reckon with our hybrid nature. Nautilus. (Accessed 31 July 2018.)
  13. Raz, J. 1986. The morality of freedom. New York: Oxford University Press.
  14. Ellis, B. 2013. The power of agency. In Powers and capacities in philosophy, ed. R. Groff and J. Greco, 186–206. New York: Routledge.
  15. Rawls, J. 1985. Justice as fairness: political not metaphysical. Philosophy & Public Affairs 14 (3): 223–251.
  16. Nagel, S.K. 2013. Autonomy—a genuinely gradual phenomenon. AJOB Neuroscience 4 (4): 60–61.
  17. Dubljević, V., S. Sattler, and E. Racine. 2018. Deciphering moral intuition: how agents, deeds and consequences influence moral judgment. PLoS One.
  18. Vohs, K.D., R.F. Baumeister, B.J. Schmeichel, J.M. Twenge, N.M. Nelson, and D.M. Tice. 2008. Making choices impairs subsequent self-control: a limited-resource account of decision making, self-regulation, and active initiative. Journal of Personality and Social Psychology 94 (5): 883–898.
  19. Hejtmánek, L., I. Oravcová, J. Motýl, J. Horáček, and I. Fajnerová. 2018. Spatial knowledge impairment after GPS guided navigation: eye-tracking study in a virtual town. International Journal of Human-Computer Studies 116: 15–24.
  20. Rahwan, I. 2017. Society-in-the-loop: programming the algorithmic social contract. Ethics and Information Technology 20: 5–14.
  21. European Union [EU]. 2016. Regulation 2016/679 of the European Parliament and of the Council. Official Journal of the European Union.
  22. Metz, C. 2019. Is ethical A.I. even possible? The New York Times. (Accessed 29 March 2019.)

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  1. Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, USA
