Abstract
One of the ultimate problems of moral philosophy is to determine who or what deserves moral consideration. "Morality" is a relative concept that changes significantly with environment and time, which makes it potentially highly inclusive. The emergence of AI technology has had a significant impact on how the "subject" is understood and distributed, producing a new situation in moral issues. When considering the morality of AI, moral problems must involve both moral agents and moral patients. A more inclusive definition of morality is necessary to extend the scope of moral consideration to other traditionally marginalized entities. An evolving ethics redefines the center of moral consideration, effectively reduces differences, becomes more inclusive, and admits more potential participants. Yet we may still need to step outside this binary agent/patient framework and solve the problem by rewriting the rules. Realizing moral AI in education is a vast and complex systematic project. To be a "trustworthy" and "responsible" companion to teachers and students, educational AI must be broadly consistent with them in its moral theoretical basis and expected values. The deep integration of AI and education is likely to become a major trend in the future development of education.
Cite this article
Sun, F., Ye, R. Moral Considerations of Artificial Intelligence. Sci & Educ 32, 1–17 (2023). https://doi.org/10.1007/s11191-021-00282-3