Abstract
This article aims to indicate how to harness the potential for good of artificial intelligence (AI) – defined as a distinct form of autonomous and self-learning agency that raises unique ethical challenges – while mitigating those challenges. The analysis focuses first on uses of AI that may lead to undue discrimination, lack of explainability, the responsibility gap, and the nudging potential of AI with its negative impact on human self-determination. It then turns to the role of ethical analysis in harnessing the potential for good of AI, and argues that existing guidelines for the ethical design, development, and use of AI will be effective only insofar as they are translated into viable guidance to shape AI-based innovation – a task for digital ethics as a translational ethics.
Acknowledgments
M.T. and L.F. are members of the Partnership on Artificial Intelligence to Benefit People and Society; L.F. is also chair of the scientific committee of AI4People.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Taddeo, M., Floridi, L. (2021). How AI Can Be a Force for Good – An Ethical Framework to Harness the Potential of AI While Keeping Humans in Control. In: Floridi, L. (eds) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_7
DOI: https://doi.org/10.1007/978-3-030-81907-1_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)