
Navigating the Landscape of AI Ethics and Responsibility

  • Conference paper
  • Progress in Artificial Intelligence (EPIA 2023)

Abstract

Artificial intelligence (AI) is now widely used in many fields, from intelligent virtual assistants to medical diagnosis, yet there is no consensus on how to deal with the ethical issues it raises. Using a systematic literature review and an analysis of recent real-world news about AI-infused systems, we cluster existing and emerging AI ethics and responsibility issues into six groups (broken systems, hallucinations, intellectual property rights violations, privacy and regulation violations, enabling malicious actors and harmful actions, and environmental and socioeconomic harms), discuss their implications, and conclude that the problem needs to be reflected upon and addressed across five partially overlapping dimensions: Research, Education, Development, Operation, and Business Model. This reflection may help caution against potential dangers and frame further research at a time when AI-based products and services are exhibiting explosive growth. Moreover, exploring effective ways to involve users and civil society in discussions on the impact and role of AI systems could help increase trust in and understanding of these technologies.



Acknowledgments

This work was partially funded by FCT (Foundation for Science and Technology, I.P./MCTES) through national funds (PIDDAC), within the scope of the CISUC R&D Unit (UIDB/00326/2020 or project code UIDP/00326/2020).

Author information

Corresponding author

Correspondence to Jacinto Estima.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cunha, P.R., Estima, J. (2023). Navigating the Landscape of AI Ethics and Responsibility. In: Moniz, N., Vale, Z., Cascalho, J., Silva, C., Sebastião, R. (eds) Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in Computer Science, vol 14115. Springer, Cham. https://doi.org/10.1007/978-3-031-49008-8_8


  • DOI: https://doi.org/10.1007/978-3-031-49008-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-49007-1

  • Online ISBN: 978-3-031-49008-8

  • eBook Packages: Computer Science, Computer Science (R0)
