The responsible robotics initiative aims to ensure that responsible practices are inculcated at each stage of design, development and use, an impetus undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be made to address issues of responsibility at each stage of technological progression, a theoretical perspective reveals an irresponsibility (that is, a lack of responsibility) inherent in the nature of robotics technologies that threatens to thwart the endeavour. This is because the concept of responsibility, despite often being treated as such, is not monolithic: the seemingly unified concept in fact consists of converging and confluent concepts that together shape what we colloquially call responsibility. Viewed this way, robotics will be simultaneously responsible and irresponsible depending on which particular concept of responsibility is foregrounded: an observation that cuts against the grain of the drive towards responsible robotics. The problem is further compounded by the contrast between responsible design and development, on the one hand, and responsible use, on the other. From a different perspective, the difficulty in defining the concept of responsibility in robotics stems from human responsibility being the main frame of reference. Robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and the other criteria necessary to act responsibly. This subsists within a larger phenomenon in which the difference between humans and non-humans, whether animals or artificial systems, appears increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence (AI) technologies.
The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is that it inverts the power relationship: while human agents are perceived to control robotic patients, the prospect of this relationship being reversed is emerging. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach involves developing human rights that not only ‘protect’ humans (in a negative, defensive sense) but also ‘strengthen’ people against the challenges introduced by robotics and AI (in a positive, empowering manner) [this distinction parallels Berlin’s negative and positive concepts of liberty (Berlin, in Liberty, Oxford University Press, Oxford, 2002)], by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the nonhuman. This allows the human frame of reference to be constitutive of, rather than only subject to, robotic and AI technologies, so that it is human rather than technological characteristics that shape the human rights framework in the first place.
While the concept of a role is socially constructed, and therefore inherently relational, the substance or content of role responsibility becomes self-contained and isolated once the defining parameters have been established. In other words, assessing the fulfilment of role responsibilities requires reference only to the content and scope of the role, and does not extend to considerations beyond its definitional contours.
Furthermore, would it be possible for an individual human being to be responsible to an object for damaging it, for example to be responsible to a table for scratching its surface? Perhaps more problematically, would it be possible for an object to be responsible to a human being for damage it does to them, for example for a computer to be responsible for deleting someone’s file?
For an array of legal mechanisms and doctrines which exclude responsibility processes, see (Liu 2016).
Under English law at least, there is now the strong, if not quite automatic, presumption that ‘respectable’ pressure groups would have locus standi in representative actions, R v HM Inspectorate of Pollution, ex p Greenpeace (No. 2)  1 WLR 570, and R v Secretary of State for Foreign and Commonwealth Affairs, ex p World Development Movement Ltd  WLR 386. Yet, this does not quite have the effect of overturning the strong individual focus of the rights framework because this jurisprudence empowers pressure groups effectively to act on behalf of aggregated collections of individuals.
Abney, K. (2012). Robotics, ethical theory, and metaethics: A guide for the perplexed. In P. Lin, K. Abney & G. Bekey (Eds.) Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.
Alger, J. M., & Alger, S. F. (1997). Beyond mead: Symbolic interaction between humans and felines. Society and Animals, 5, 65–81.
Arendt, H. (1994). Eichmann in Jerusalem: A report on the banality of evil. New York: Penguin.
Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.
Asaro, P. M. (2007). Robots and responsibility from a legal perspective. Proceedings of the IEEE, 20–24.
Asaro, P. M. (2016). The liability problem for autonomous artificial agents. In AAAI Spring Symposium Series, 21–23 March 2016 (pp. 190–194), Stanford University. https://www.aaai.org/ocs/index.php/SSS/SSS16/.
Ashrafian, H. (2014). Artificial intelligence and robot responsibilities: Innovating beyond rights. Science and Engineering Ethics, 21, 317–326.
Beck, U. (1992). Risk society: Towards a new modernity. London: SAGE Publications.
Beck, U. (2006). Living in the world risk society: A Hobhouse Memorial Public Lecture given on Wednesday 15 February 2006 at the London School of Economics. Economy and Society, 35, 329–345.
Becker, H. S., & McCall, M. M. (2009). Symbolic interaction and cultural studies. Chicago: University of Chicago Press.
Bekey, G. A. (2005). Autonomous robots: From biological inspiration to implementation and control. Cambridge MA: MIT Press.
Berlin, I. (2002). Liberty. Oxford: Oxford University Press.
Blumer, H. (1986). Symbolic interactionism: Perspective and method. California: University of California Press.
Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1). https://www.jetpress.org/volume9/risks.html.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Bovens, M. (1998). The quest for responsibility: Accountability and citizenship in complex organisations. Cambridge: Cambridge University Press.
Brooks, R. A. (2002). Flesh and machines: How robots will change us. New York: Pantheon Books.
Cane, P. (2002). Responsibility in law and morality. Oxford: Hart Publishing.
Capurro, R., & Nagenborg, M. (2009). Ethics and robotics. Amsterdam: IOS Press.
Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12, 209–221.
Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behaviour towards robotic objects. In R. Calo, A. M. Froomkin & I. Kerr (Eds.), Robot law. Cheltenham: Edward Elgar Publishing.
Dautenhahn, K. (1998). The art of designing socially intelligent agents: Science, fiction, and the human in the loop. Applied Artificial Intelligence, 12, 573–617.
David, P. A. (1985). Clio and the economics of QWERTY. The American Economic Review, 75, 332–337.
Dershowitz, A. M. (2009). Rights from wrongs: A secular theory of the origins of rights. New York: Basic Books.
Domanska, E. (2011). Beyond anthropocentrism in historical studies. Historein, 10, 118–130.
Dworkin, R. (2011). Justice for hedgehogs. Cambridge: Harvard University Press.
Elish, M. C. (2016). Moral crumple zones: Cautionary tales in human–robot interaction. Paper presented at WeRobot 2016, University of Miami.
Fuller, S. (2011). Humanity 2.0: What it means to be human past, present and future. Basingstoke: Palgrave Macmillan.
Giddens, A. (1999). Risk and responsibility. The Modern Law Review. https://doi.org/10.1111/1468-2230.00188.
Gleick, J. (1997). Chaos: Making a new science. London: Vintage.
Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. London: Vintage.
Hart, H. L. A. (2008). Punishment and responsibility: Essays in the philosophy of law. Oxford: Oxford University Press.
Hayles, N. K. (2006). Unfinished work from cyborg to cognisphere. Theory, Culture and Society, 23, 159–166.
Irrgang, B. (2006). Ethical acts in robotics. Ubiquity, 7(34). https://ubiquity.acm.org/article.cfm?id=1164071.
Johnson, S. (2002). Emergence: The connected lives of ants, brains, cities, and software. New York: Simon and Schuster.
Kahn, P. H., Jr., Ishiguro, H., Friedman, B., & Kanda, T. (2006). What is a human? Toward psychological benchmarks in the field of human–robot interaction. The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006). IEEE.
Karnow, C. E. A. (1996). Liability for distributed artificial intelligences. Berkeley Technology Law Journal, 11, 147–204.
Kennedy, D. (2005). The dark sides of virtue: Reassessing international humanitarianism. Princeton: Princeton University Press.
Kim, T., & Hinds, P. (2006). Who should I blame? Effects of autonomy and transparency on attributions in human–robot interaction. The 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006). IEEE.
Knights, D., & Willmott, H. (1999). Management lives: Power and identity in work organizations. London: SAGE Publications.
Koops, B.-J., Jaquet-Chifelle, D.-O., & Hildebrandt, M. (2010). Bridging the accountability gap: Rights for new entities in the information society? Minnesota Journal of Law Science and Technology, 11, 497–561.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York: Penguin Books.
Lessig, L. (1999). The law of the horse: What cyberlaw might teach. Harvard Law Review, 113, 501.
Liebowitz, S. J., & Margolis, S. E. (1995). Path dependence, lock-in, and history. Journal of Law, Economics and Organization, 11, 205–226.
Liu, H.-Y. (2015). Law’s impunity: Responsibility and the Modern Private Military Company. Oxford: Hart Publishing.
Liu, H.-Y. (2016). Refining responsibility: differentiating two types of responsibility issues raised by autonomous weapons systems. In N. Bhuta, S. Beck, R. Geiβ, H.-Y. Liu & C. Kreβ (Eds.), Autonomous weapons systems: Law, ethics, policy. Cambridge: Cambridge University Press.
Marchant, G. E., Allenby, B. R., & Herkert, J. R. (2011). The growing gap between emerging technologies and legal-ethical oversight: The pacing problem. Berlin: Springer.
Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics, 6, 46–51.
Marshall, J. (2014). Human rights law and personal identity. Abingdon: Routledge.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.
McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20, 119–136.
Mead, G. H. (1934). Mind, self, and society: From the standpoint of a social behaviorist. Chicago: University Of Chicago Press.
Menzel, P., & D’Aluisio, F. (2001). Robo sapiens: Evolution of a new species. Cambridge: MIT Press.
Miller, L. F. (2015). Granting automata human rights: Challenge to a basis of full-rights privilege. Human Rights Review, 16, 369–391.
Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2, 25–42.
O’Brien, J. (2006). The production of reality: Essays and readings on social interaction. Newbury Park: Pine Forge Press.
Pagallo, U. (2013). The laws of robots: Crimes, contracts and torts. Berlin: Springer.
Reason, J. (2000). Human error: Models and management. British Medical Journal, 320, 768–770.
Richardson, K. (2015). An anthropology of robots and AI: Annihilation anxiety and machines. Abingdon: Routledge.
Roden, D. (2014). Posthuman life: Philosophy at the edge of the human. Abingdon: Routledge.
Sanders, C. R. (2003). Actions speak louder than words: Close relationships between humans and nonhuman animals. Symbolic Interaction, 26, 405–426.
Scheff, T. J. (2009). Being mentally ill: A sociological theory. Piscataway: Transaction Publishers.
Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14, 27–40.
Solum, L. B. (1991). Legal personhood for artificial intelligences. North Carolina Law Review, 70, 1231.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24, 62–77.
Stone, C. D. (1972). Should trees have standing—toward legal rights for natural objects. Southern California Law Review, 45, 450.
Stryker, S. (1981). Symbolic interactionism: Themes and variations. In M. Rosenberg & R. H. Turner (Eds.), Social psychology: Sociological perspectives (pp. 3–29). New York: Basic Books.
Tamburrini, G. (2009). Robot ethics: A view from the philosophy of science. In R. Capurro, M. Nagenborg (Eds.), Ethics and robotics (pp. 11–22). Heidelberg: IOS Press.
Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society, 22, 495–521.
United Nations General Assembly. (1948). Universal Declaration of Human Rights. UNGAR 217 A (III).
Veitch, S. (2007). Law and irresponsibility: On the legitimation of human suffering. Oxford: Routledge Cavendish.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Walton, M. D. (1985). Negotiation of responsibility: Judgments of blameworthiness in a natural setting. Developmental Psychology, 21, 725.
Yudkowsky, E. (2011). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. M. Cirkovic (Eds.), Global catastrophic risks. Oxford: Oxford University Press.
Zawieska, K. (2015). Deception and manipulation in social robotics. Workshop on The Emerging Policy and Ethics of Human-Robot Interaction at the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI2015). Portland, Oregon, USA.
Zawieska, K., & Stańczyk, A. (2015). Anthropomorphic language in robotics. Workshop Bridging the Gap between HRI and Robot Ethics Research at the 7th International Conference on Social Robotics (ICSR2015).
Liu, HY., Zawieska, K. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Ethics Inf Technol 22, 321–333 (2020). https://doi.org/10.1007/s10676-017-9443-3
Keywords: Human rights