Abstract
Artificial intelligence has the potential to impose the values of its creators on its users, on those affected by it, and on society at large. The intentions of creators and investors may not comport with the values of users and broader society, and users themselves may put a technology to illicit or unexpected uses. Technology also changes people's intentions: what people mean to do with its help reflects their choices, preferences, and values, and technology is a disruptor that affects society as a whole. Without knowing who intends to do what, it is difficult to rely on the creators of technology to choose methods and build products that comport with user and broader societal values. AI either is programmed to accomplish tasks according to chosen values or arrives at them through machine learning and deep learning. We argue that AI is quasi-intentional and that it changes people's intentions. Investors wishing to promote or preserve public health, wellbeing, and wellness should therefore invest in ethical, responsible technology, and environmental, social, and governance (ESG) considerations and metrics should include ethical technology, wellness, public health, and societal wellbeing. This paper concludes that the process by which technology creators infuse values should be grounded in bioethical and general ethical considerations, should reflect potentially multiple intentions, and should include a willingness and a process to adapt the AI after deployment as the circumstances of its use change.
Availability of data and materials
Not applicable.
Acknowledgements
We have no acknowledgements.
Funding
We have received no funding for the article.
Contributions
Each author contributed to the conceptualization, argument, and writing.
Ethics declarations
Conflict of interest
We have no competing interests.
About this article
Cite this article
Zimmerman, A., Janhonen, J., Saadeh, M. et al. Values in AI: bioethics and the intentions of machines and people. AI Ethics 3, 1003–1012 (2023). https://doi.org/10.1007/s43681-022-00242-9