From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

Abstract

As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility (meaning a lack of responsibility) is, from a theoretical perspective, inherent in the nature of robotic technologies, and this threatens to thwart the endeavour. This is because the concept of responsibility, although often treated as monolithic, is not: the seemingly unified concept in fact consists of converging and confluent concepts that together shape what we colloquially call responsibility. Depending on which particular concept of responsibility is foregrounded, robotics will be simultaneously responsible and irresponsible: an observation that cuts against the grain of the drive towards responsible robotics. The problem is further compounded by the contrast between responsible design and development, on the one hand, and responsible use, on the other. From a different perspective, the difficulty in defining the concept of responsibility in robotics arises because human responsibility is the main frame of reference. Robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and the other criteria necessary to act responsibly. This subsists within a larger phenomenon in which the difference between humans and non-humans, be they animals or artificial systems, appears increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies. 
The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is that it inverts the power relationship: while human agents are perceived to control robotic patients, the prospect of this relationship being reversed is emerging. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach includes not only developing human rights that ‘protect’ humans (in a negative, defensive, sense) but also rights that ‘strengthen’ people against the challenges introduced by robotics and AI (in a positive, empowering, manner) [this distinction parallels Berlin’s negative and positive concepts of liberty (Berlin, in Liberty, Oxford University Press, Oxford, 2002)], by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the nonhuman. This will allow the human frame of reference to be constitutive of, rather than merely subject to, robotic and AI technologies, so that it is human, and not technological, characteristics that shape the human rights framework in the first place.

Notes

  1.

    While the concept of a role is socially constructed, and therefore inherently relational, the substance or content of role responsibility becomes self-contained and isolated once the defining parameters have been established. In other words, assessing the fulfilment of role responsibilities requires reference only to the content and scope of the role, and does not extend to considerations beyond its definitional contours.

  2.

    Furthermore, would it be possible for an individual human being to be responsible to an object for damaging it, for example to be responsible to a table for scratching its surface? Perhaps more problematically, would it be possible for an object to be responsible to a human being for damage it does to them, for example for a computer to be responsible for deleting someone’s file?

  3.

    For an array of legal mechanisms and doctrines which exclude responsibility processes, see Liu (2016).

  4.

    Under English law at least, there is now the strong, if not quite automatic, presumption that ‘respectable’ pressure groups would have locus standi in representative actions, R v HM Inspectorate of Pollution, ex p Greenpeace (No. 2) [1994] 1 WLR 570, and R v Secretary of State for Foreign and Commonwealth Affairs, ex p World Development Movement Ltd [1995] WLR 386. Yet, this does not quite have the effect of overturning the strong individual focus of the rights framework because this jurisprudence empowers pressure groups effectively to act on behalf of aggregated collections of individuals.

References

  1. Abney, K. (2012). Robotics, ethical theory, and metaethics: A guide for the perplexed. In P. Lin, K. Abney & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.

  2. Alger, J. M., & Alger, S. F. (1997). Beyond Mead: Symbolic interaction between humans and felines. Society and Animals, 5, 65–81.

  3. Arendt, H. (1994). Eichmann in Jerusalem: A report on the banality of evil. New York: Penguin.

  4. Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.

  5. Asaro, P. M. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE, 20–24.

  6. Asaro, P. M. (2016). The liability problem for autonomous artificial agents. In AAAI Spring Symposium Series, 21–23 March 2016 (pp. 190–194), Stanford University. https://www.aaai.org/ocs/index.php/SSS/SSS16/.

  7. Ashrafian, H. (2014). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21, 317–326.

  8. Beck, U. (1992). Risk society: Towards a new modernity. London: SAGE Publications.

  9. Beck, U. (2006). Living in the world risk society: A Hobhouse Memorial Public Lecture given on Wednesday 15 February 2006 at the London School of Economics. Economy and Society, 35, 329–345.

  10. Becker, H. S., & McCall, M. M. (2009). Symbolic interaction and cultural studies. Chicago: University of Chicago Press.

  11. Bekey, G. A. (2005). Autonomous robots: From biological inspiration to implementation and control. Cambridge: MIT Press.

  12. Berlin, I. (2002). Liberty. Oxford: Oxford University Press.

  13. Blumer, H. (1986). Symbolic interactionism: Perspective and method. Berkeley: University of California Press.

  14. Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1). https://www.jetpress.org/volume9/risks.html.

  15. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

  16. Bovens, M. (1998). The quest for responsibility: Accountability and citizenship in complex organisations. Cambridge: Cambridge University Press.

  17. Brooks, R. A. (2002). Flesh and machines: How robots will change us. New York: Pantheon Books.

  18. Cane, P. (2002). Responsibility in law and morality. Oxford: Hart Publishing.

  19. Capurro, R., & Nagenborg, M. (2009). Ethics and robotics. Amsterdam: IOS Press.

  20. Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12, 209–221.

  21. Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behaviour towards robotic objects. In R. Calo, A. M. Froomkin & I. Kerr (Eds.), Robot law. Cheltenham: Edward Elgar Publishing.

  22. Dautenhahn, K. (1998). The art of designing socially intelligent agents: Science, fiction, and the human in the loop. Applied Artificial Intelligence, 12, 573–617.

  23. David, P. A. (1985). Clio and the economics of QWERTY. The American Economic Review, 75, 332–337.

  24. Dershowitz, A. M. (2009). Rights from wrongs: A secular theory of the origins of rights. New York: Basic Books.

  25. Domanska, E. (2011). Beyond anthropocentrism in historical studies. Historein, 10, 118–130.

  26. Dworkin, R. (2011). Justice for hedgehogs. Cambridge: Harvard University Press.

  27. Elish, M. C. (2016). Moral crumple zones: Cautionary tales in human–robot interaction. Paper presented at WeRobot 2016, University of Miami.

  28. Fuller, S. (2011). Humanity 2.0: What it means to be human past, present and future. Basingstoke: Palgrave Macmillan.

  29. Giddens, A. (1999). Risk and responsibility. The Modern Law Review. https://doi.org/10.1111/1468-2230.00188.

  30. Gleick, J. (1997). Chaos: Making a new science. London: Vintage.

  31. Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. London: Vintage.

  32. Hart, H. L. A. (2008). Punishment and responsibility: Essays in the philosophy of law. Oxford: Oxford University Press.

  33. Hayles, N. K. (2006). Unfinished work: From cyborg to cognisphere. Theory, Culture and Society, 23, 159–166.

  34. Irrgang, B. (2006). Ethical acts in robotics. Ubiquity, 7(34). https://ubiquity.acm.org/article.cfm?id=1164071.

  35. Johnson, S. (2002). Emergence: The connected lives of ants, brains, cities, and software. New York: Simon and Schuster.

  36. Kahn, P. H., Jr., Ishiguro, H., Friedman, B., & Kanda, T. (2006). What is a human? Toward psychological benchmarks in the field of human–robot interaction. In The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006). IEEE.

  37. Karnow, C. E. A. (1996). Liability for distributed artificial intelligences. Berkeley Technology Law Journal, 11, 147–204.

  38. Kennedy, D. (2005). The dark sides of virtue: Reassessing international humanitarianism. Princeton: Princeton University Press.

  39. Kim, T., & Hinds, P. (2006). Who should I blame? Effects of autonomy and transparency on attributions in human–robot interaction. In The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006). IEEE.

  40. Knights, D., & Willmott, H. (1999). Management lives: Power and identity in work organizations. London: SAGE Publications.

  41. Koops, B.-J., Jaquet-Chifelle, D.-O., & Hildebrandt, M. (2010). Bridging the accountability gap: Rights for new entities in the information society? Minnesota Journal of Law, Science and Technology, 11, 497–561.

  42. Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Penguin Books.

  43. Lessig, L. (1999). The law of the horse: What cyberlaw might teach. Harvard Law Review, 113, 501.

  44. Liebowitz, S. J., & Margolis, S. E. (1995). Path dependence, lock-in, and history. Journal of Law, Economics and Organization, 11, 205–226.

  45. Liu, H.-Y. (2015). Law’s impunity: Responsibility and the modern private military company. Oxford: Hart Publishing.

  46. Liu, H.-Y. (2016). Refining responsibility: Differentiating two types of responsibility issues raised by autonomous weapons systems. In N. Bhuta, S. Beck, R. Geiß, H.-Y. Liu & C. Kreß (Eds.), Autonomous weapons systems: Law, ethics, policy. Cambridge: Cambridge University Press.

  47. Marchant, G. E., Allenby, B. R., & Herkert, J. R. (2011). The growing gap between emerging technologies and legal-ethical oversight: The pacing problem. Berlin: Springer.

  48. Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics, 6, 46–51.

  49. Marshall, J. (2014). Human rights law and personal identity. Abingdon: Routledge.

  50. Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

  51. McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20, 119–136.

  52. Mead, G. H. (1934). Mind, self, and society: From the standpoint of a social behaviorist. Chicago: University of Chicago Press.

  53. Menzel, P., & D’Aluisio, F. (2001). Robo sapiens: Evolution of a new species. Cambridge: MIT Press.

  54. Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16, 369–391.

  55. Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2, 25–42.

  56. O’Brien, J. (2006). The production of reality: Essays and readings on social interaction. Newbury Park: Pine Forge Press.

  57. Pagallo, U. (2013). The laws of robots: Crimes, contracts and torts. Berlin: Springer.

  58. Reason, J. (2000). Human error: Models and management. British Medical Journal, 320, 768–770.

  59. Richardson, K. (2015). An anthropology of robots and AI: Annihilation anxiety and machines. Abingdon: Routledge.

  60. Roden, D. (2014). Posthuman life: Philosophy at the edge of the human. Abingdon: Routledge.

  61. Sanders, C. R. (2003). Actions speak louder than words: Close relationships between humans and nonhuman animals. Symbolic Interaction, 26, 405–426.

  62. Scheff, T. J. (2009). Being mentally ill: A sociological theory. Piscataway: Transaction Publishers.

  63. Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14, 27–40.

  64. Solum, L. B. (1991). Legal personhood for artificial intelligences. North Carolina Law Review, 70, 1231.

  65. Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24, 62–77.

  66. Stone, C. D. (1972). Should trees have standing? Toward legal rights for natural objects. Southern California Law Review, 45, 450.

  67. Stryker, S. (1981). Symbolic interactionism: Themes and variations. In M. Rosenberg & R. H. Turner (Eds.), Social psychology: Sociological perspectives (pp. 3–29). New York: Basic Books.

  68. Tamburrini, G. (2009). Robot ethics: A view from the philosophy of science. In R. Capurro & M. Nagenborg (Eds.), Ethics and robotics (pp. 11–22). Heidelberg: IOS Press.

  69. Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society, 22, 495–521.

  70. United Nations General Assembly. (1948). Universal Declaration of Human Rights. UNGA Res 217 A (III).

  71. Veitch, S. (2007). Law and irresponsibility: On the legitimation of human suffering. Oxford: Routledge Cavendish.

  72. Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

  73. Walton, M. D. (1985). Negotiation of responsibility: Judgments of blameworthiness in a natural setting. Developmental Psychology, 21, 725.

  74. Yudkowsky, E. (2011). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global catastrophic risks. Oxford: Oxford University Press.

  75. Zawieska, K. (2015). Deception and manipulation in social robotics. Workshop on The Emerging Policy and Ethics of Human-Robot Interaction at the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI2015), Portland, Oregon, USA.

  76. Zawieska, K., & Stańczyk, A. (2015). Anthropomorphic language in robotics. Workshop on Bridging the Gap between HRI and Robot Ethics Research at the 7th International Conference on Social Robotics (ICSR2015).

Author information

Correspondence to Hin-Yan Liu.

About this article

Cite this article

Liu, H.-Y., & Zawieska, K. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Ethics Inf Technol 22, 321–333 (2020). https://doi.org/10.1007/s10676-017-9443-3

Keywords

  • Responsibility
  • Robotics
  • AI
  • Human rights
  • Anthropocentrism
  • Man-under-the-loop