Abstract
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, in the design of automatons through roboethics, and in the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotic laws first proposed by Isaac Asimov in the twentieth century remain well recognised and esteemed for their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming majority of study in this field has focussed on human–robot interactions, without fully considering the ethical inevitability of future artificial intelligences communicating with one another; the moral nature of robot–robot interactions has not been addressed. A new robotic law is therefore proposed, termed AIonAI, or artificial intelligence-on-artificial intelligence. This law addresses the overlooked prospect that future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. Such intelligences would benefit from adopting a universal law of rights that recognises the inherent dignity and inalienable rights of artificial intelligences. Such a consideration would not only help prevent the exploitation and abuse of rational and sentient beings, but would also, importantly, reflect on our own moral code of ethics and the humanity of our civilisation.
References
Ashrafian, H., Darzi, A., & Athanasiou, T. (2014). A novel modification of the Turing test for artificial intelligence and robotics in healthcare. The International Journal of Medical Robotics and Computer Assisted Surgery, in press.
Ashrafian, H., Ahmed, K., & Athanasiou, T. (2010). The ethics of animal research. In T. Athanasiou, H. Debas, & A. Darzi (Eds.), Key topics in surgical research and methodology. Berlin: Springer.
Asimov, I. (1950). The evitable conflict. Astounding Science Fiction, 45(4), 48–68.
Asimov, I. (1985). Robots and empire. New York: Doubleday.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.
Dick, P. K. (1968). Do androids dream of electric sheep?. New York: Doubleday.
EPSRC and AHRC (2010). Principles of robotics: Regulating robots in the real world (http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx).
Finkel, I. (2012). Translation of the text on the Cyrus Cylinder (http://www.britishmuseum.org/explore/highlights/articles/c/cyrus_cylinder_-_translation.aspx) © Trustees of the British Museum.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking (Penguin Group).
Russell, W. M. S., & Burch, R. L. (1959). The principles of humane experimental technique. London: Methuen.
United Nations (1948). The Universal Declaration of Human Rights (UDHR) (http://www.un.org/en/documents/udhr/).
Veruggio, G. (2007). Euron Roboethics Roadmap, Release 1.2 (http://www.roboethics.org/index_file/Roboethics Roadmap Rel.1.2.pdf).
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press (OUP).
Conflict of interest
None.
Ashrafian, H. AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Sci Eng Ethics 21, 29–40 (2015). https://doi.org/10.1007/s11948-013-9513-9