
Values in AI: bioethics and the intentions of machines and people

  • Commentary
  • Published:
AI and Ethics

Abstract

Artificial intelligence has the potential to impose the values of its creators on its users, on those affected by it, and on society at large. The intentions of creators and investors may not comport with the values of users or of broader society, and users themselves may put a technological device to illicit or unexpected uses. Technology also changes people's intentions as it empowers them: what people mean to do with the help of technology reflects their choices, preferences, and values. Because technology is a disruptor that affects society as a whole, and because it is difficult to know in advance who intends to do what, the creators of technology cannot be relied on alone to choose methods and create products that comport with user and broader societal values. AI accomplishes tasks either according to values its programmers choose or according to patterns it derives through machine learning and deep learning. We assert that AI is quasi-intentional and that it changes people's intentions. Investors wishing to promote or preserve public health, wellbeing, and wellness should invest in ethical, responsible technology, and environmental, social, and governance (ESG) considerations and metrics should include ethical technology, wellness, public health, and societal wellbeing. This paper concludes that the process by which technology creators infuse values should be grounded in bioethical and general ethical considerations, should reflect the possibility of multiple intentions, and should include a willingness and a process to adapt the AI after the fact as the circumstances of its use change.


Availability of data and materials

Not applicable.


Acknowledgements

We have no acknowledgements.

Funding

We have received no funding for the article.

Author information

Contributions

Each author contributed to the conceptualization, argument, and writing.

Corresponding author

Correspondence to Anne Zimmerman.

Ethics declarations

Conflict of interest

We have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zimmerman, A., Janhonen, J., Saadeh, M. et al. Values in AI: bioethics and the intentions of machines and people. AI Ethics 3, 1003–1012 (2023). https://doi.org/10.1007/s43681-022-00242-9

