Toward an Ethics of AI Assistants: an Initial Framework


Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling, and so on. Using such assistants is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing, claiming that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Others take a more subtle view, arguing that it is problematic only in those cases where its use degrades important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I argue that the ethics of their use is complex: there are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and think more clearly about what it means to live a good life in the age of smart machines.

  1.

    The quotes come from John McCarthy, ‘What is Artificial Intelligence? Basic Questions’, available at – note that this quote and the quote from Russell and Norvig were originally sourced through Scherer 2016.

  2.

    Indeed, this literature is explicitly invoked by many of the critics of AI assistance e.g. Carr 2014, Krakauer 2016, and Crawford 2015.

  3.

    A reviewer wonders, for example, why I do not discuss the consequences of using AI assistants to outsource moral decision-making. There are several reasons for this. The most pertinent is that I have discussed moral outsourcing as a specific problem in another paper (Danaher 2016b) and, as I point out in that paper, I suspect discussions of moral outsourcing to AI will raise similar issues to those already discussed in the expansive literature on the use of enhancement technologies to improve moral decision-making (for a similar analysis, coupled with a defence of the use of AI moral assistance, see Giubilini and Savulescu 2018). That said, some of what I say below about degeneration, autonomy and interpersonal virtue will also be relevant to debates about the use of moral AI assistance.

  4.

    I am indebted to Miles Brundage for suggesting this line of argument to me. We write about it in more detail on my webpage:

  5.

    I am indebted to an anonymous reviewer for suggesting the distinction between personalization and manipulation. As they pointed out, personalization also has costs, e.g. a filter bubble that serves to reinforce prejudices, that may not be desirable in a pluralistic, democratic society, but it’s not clear that those problems are best understood in terms of a threat to autonomy. Cass Sunstein’s #Republic (2017) explores the political fallout of filter bubbles in more detail.

  6.

    As Hare and Vincent point out, while humans may be bad at predicting whether a future option will make us happy, our judgment as to whether a chosen option has made us happy is, effectively, incorrigible. Nobody knows better than we do ourselves. It is to this latter type of judgment that I appeal in this argument.

  7.

    As a reviewer points out, it may be impossible for interpersonal communication to ever adequately capture one’s true feelings. This may well be right, but if so, it would seem to be a problem for automated and non-automated communications alike.


  1. Burgos, D., Van Nimwegen, C., Van Oostendorp, H., & Koper, R. (2007). Game-based learning and immediate feedback: The case study of the Planning Educational Task. International Journal of Advanced Technology in Learning. Available at (accessed 29/11/2016).

  2. Burrell, J. (2016). How the machine thinks: Understanding opacity in machine learning systems. Big Data and Society.

  3. Carr, N. (2014). The glass cage: Where automation is taking us. London: The Bodley Head.

  4. Crawford, M. (2015). The world beyond your head. New York: Farrar, Straus and Giroux.

  5. Danaher, J. (2016a). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology, 29(3), 245–268.

  6. Danaher, J. (2016b). Why internal moral enhancement might be politically better than external moral enhancement. Neuroethics.

  7. Dworkin, G. (1988). The theory and practice of autonomy. Cambridge: CUP.

  8. Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68, 5–20.

  9. Frischmann, B. (2014). Human-focused Turing tests: A framework for judging nudging and the techno-social engineering of humans. Cardozo Legal Studies Research Paper No. 441 - available at (accessed 29/11/2016).

  10. Giubilini, A., & Savulescu, J. (2018). The Artificial Moral Advisor. The ‘Ideal Observer’ meets Artificial Intelligence. Philosophy and Technology, 31(2), 169–188.

  11. Hare, S., & Vincent, N. (2016). Happiness, cerebroscopes and incorrigibility: Prospects for Neuroeudaimonia. Neuroethics, 9(1), 69–84.

  12. Heersmink, R. (2015). Extended mind and cognitive enhancement: Moral aspects of extended cognition. Phenomenology and the Cognitive Sciences.

  13. Heersmink, R. (2013). A taxonomy of cognitive artifacts: Function, information and categories. Review of Philosophy and Psychology, 4(3), 465–481.

  14. Kelly, S., & Dreyfus, H. (2011). All things shining. New York: Free Press.

  15. Kirsh, D. (2010). Thinking with external representations. AI and Society, 25, 441–454.

  16. Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73, 31–68.

  17. Krakauer, D. (2016). Will AI harm us? Better to ask how we’ll reckon with our hybrid nature. Nautilus 6 September 2016 - available at (accessed 29/11/2016).

  18. Luper, S. (2014). Life’s meaning. In S. Luper (Ed.), The Cambridge Companion to Life and Death. Cambridge: Cambridge University Press.

  19. Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society.

  20. Morozov, E. (2013). The real privacy problem. MIT Technology Review. Available at (accessed 29/11/16).

  21. Mullainathan, S., & Shafir, E. (2014). Freeing up intelligence. Scientific American Mind, Jan/Feb, 58–63.

  22. Mullainathan, S., & Shafir, E. (2012). Scarcity: The true cost of not having enough. London: Penguin.

  23. Nagel, S. (2010). Too much of a good thing? Enhancement and the burden of self-determination. Neuroethics, 3, 109–119.

  24. Nass, C., & Flatow, I. (2013). The myth of multitasking. NPR: Talk of the Nation, 10 May 2013. Available at (accessed 29/11/2016).

  25. van Nimwegen, C., Burgos, D., van Oostendorp, H., & Schijf, H. (2006). The paradox of the assisted user: Guidance can be counterproductive. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 917–926.

  26. Newport, C. (2016). Deep Work. New York: Grand Central Publishing.

  27. Norman, D. (1991). Cognitive artifacts. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface. Cambridge: Cambridge University Press.

  28. Ophir, E., Nass, C., & Wagner, A. (2009). Cognitive control in media multitaskers. PNAS, 106(37), 15583–15587.

  29. Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. PNAS, 107(Suppl 2), 8993–8999.

  30. Plato. The Phaedrus. From Plato in Twelve Volumes, Vol. 9, translated by Harold N. Fowler. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1925. Available at (accessed 29/11/2016).

  31. Raz, J. (1986). The morality of freedom. Oxford: OUP.

  32. Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd global ed.). Essex: Pearson.

  33. Sandel, M. (2012). What money can’t buy: The moral limits of markets. London: Penguin.

  34. Scheibehenne, B., Greifeneder, R., & Todd, P. M. (2010). Can there ever be too many options? A meta-analytic review of choice overload. Journal of Consumer Research, 37, 409–425.

  35. Scherer, M. (2016). Regulating artificial intelligence systems: Challenges, competencies and strategies. Harvard Journal of Law and Technology, 29(2), 354–400.

  36. Schwartz, B. (2004). The paradox of choice: Why less is more. New York, NY: Harper Collins.

  37. Selinger, E., & Frischmann, B. (2016). The dangers of smart communication technology. The Arc Mag, 13 September 2016. Available at (accessed 29/11/2016).

  38. Selinger, E. (2014a). Today’s apps are turning us into sociopaths. WIRED, 26 February 2014. Available at (accessed 29/11/2016).

  39. Selinger, E. (2014b). Don’t outsource your dating life. CNN: Edition, 2 May 2014. Available at (accessed 29/11/2016).

  40. Selinger, E. (2014c). Outsourcing your mind and intelligence to computer/phone apps. Institute for Ethics and Emerging Technologies, 8 April 2014. Available at (accessed 29/11/2014).

  41. Shah, A. K., Mullainathan, S., & Shafir, E. (2012). Some consequences of having too little. Science, 338, 682–685.

  42. Slamecka, N., & Graf, P. (1978). The generation effect: The delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 592–604.

  43. Smuts, A. (2013). The good cause account of the meaning of life. The Southern Journal of Philosophy, 51(4), 536–562.

  44. Sunstein, C. (2016). The ethics of influence. Cambridge, UK: Cambridge University Press.

  45. Sunstein, C. (2017). # Republic: Divided democracy in an age of social media. Princeton, NJ: Princeton University Press.

  46. Thaler, R., & Sunstein, C. (2009). Nudge: Improving decisions about health, wealth and happiness. London: Penguin.

  47. Wertheimer, A. (1987). Coercion. Princeton, NJ: Princeton University Press.

  48. Whitehead, A. N. (1911). An introduction to mathematics. London: Williams and Norgate.

  49. Wu, T. (2017). The Attention Merchants. New York: Atlantica.

  50. Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication and Society, 20(1), 118–136.

Author information

Correspondence to John Danaher.


Cite this article

Danaher, J. Toward an Ethics of AI Assistants: an Initial Framework. Philos. Technol. 31, 629–653 (2018).


  • Artificial intelligence
  • Degeneration
  • Cognitive outsourcing
  • Embodied cognition
  • Autonomy
  • Interpersonal communications