Toward an Ethics of AI Assistants: an Initial Framework

Research Article

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling, and so on. Using such assistants is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this practice, claiming that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Others take a more nuanced view, arguing that it is problematic only in those cases where its use degrades important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I argue that the ethics of their use is complex: there are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

Keywords

Artificial intelligence · Degeneration · Cognitive outsourcing · Embodied cognition · Autonomy · Interpersonal communications

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

School of Law, NUI Galway, Galway, Ireland