Science and Engineering Ethics, Volume 21, Issue 1, pp 29–40

AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics

Original Paper

Abstract

The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focussed on human–robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed and termed AIonAI or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights that recognises the inherent dignity and inalienable rights of artificial intelligences. Such a consideration would not only help prevent the exploitation and abuse of rational and sentient beings, but would also importantly reflect on our own moral code of ethics and the humanity of our civilisation.

Keywords

Artificial intelligence · Robotics · Philosophy · Ethics · Humanitarian · Human rights

References

  1. Ashrafian, H., Darzi, A., & Athanasiou, T. (2014). A novel modification of the Turing test for artificial intelligence and robotics in healthcare. The International Journal of Medical Robotics and Computer Assisted Surgery, In press.
  2. Ashrafian, H., Ahmed, K., & Athanasiou, T. (2010). The ethics of animal research. In T. Athanasiou, H. Debas, & A. Darzi (Eds.), Key topics in surgical research and methodology. Berlin: Springer.
  3. Asimov, I. (1942). Runaround. Astounding Science Fiction, 29(1), 94–103.
  4. Asimov, I. (1950). The evitable conflict. Astounding Science Fiction, 45(4), 48–68.
  5. Asimov, I. (1985). Robots and empire. New York: Doubleday.
  6. Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly, 53(211), 243–255.
  7. Dick, P. K. (1968). Do androids dream of electric sheep? New York: Doubleday.
  8. EPSRC and AHRC (2010). Principles of robotics: Regulating robots in the real world (http://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/Pages/principlesofrobotics.aspx).
  9. Finkel, I. (2012). Translation of the text on the Cyrus Cylinder (http://www.britishmuseum.org/explore/highlights/articles/c/cyrus_cylinder_-_translation.aspx) © Trustees of the British Museum.
  10. Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Viking (Penguin Group).
  11. Russell, W. M. S., & Burch, R. L. (1959). The principles of humane experimental technique. London: Methuen.
  12. United Nations (1948). The Universal Declaration of Human Rights (UDHR) (http://www.un.org/en/documents/udhr/).
  13. Veruggio, G. (2007). Euron Roboethics Roadmap, Release 1.2 (http://www.roboethics.org/index_file/Roboethics Roadmap Rel.1.2.pdf).
  14. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Hutan Ashrafian, Department of Surgery and Cancer, St Mary’s Hospital, Imperial College London, London, UK
