Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective

  • Original Research/Scholarship
  • Published in: Science and Engineering Ethics

Abstract

Empirical studies suggest that language-capable robots have the persuasive power to shape shared moral norms through how they respond to human norm violations. This persuasive power is cause for concern, but it also presents an opportunity: robots could persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. Drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes in response to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and cultivate their own virtues. Furthermore, this ability can and should be considered one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of Confucian theories for the design of socially integrated and morally competent robots.


Notes

  1. The Confucianism discussed in this paper is mainly “classical Confucianism,” also known as “early Confucianism” or “pre-Qin Confucianism.” In particular, this paper discusses the Confucian ethics developed before the founding of the Qin dynasty (221–206 BCE) by early Confucian scholars, represented by Confucius (551–479 BCE) and Mencius (372–289 BCE). When discussing Confucian scholarship on blame and remonstration, we also include the work of Wang Fuzhi (1619–1692), a prominent Confucian scholar of the late Ming dynasty (1368–1644).

  2. We note that robot persuasion is of course not always beneficial; teachers have raised a number of concerns regarding the persuasive capabilities of robots (Serholt et al. 2017).

  3. These are the first two Confucian robot ethics principles. See Liu (2017) for detailed discussion of the three Confucian robot ethics principles.

References

  • Ames, R. T. (2011). Confucian role ethics: A vocabulary. Hong Kong: The Chinese University of Hong Kong Press.

  • Ames, R. T. (2016). Theorizing "person" in Confucian ethics: A good place to start. Sungkyun Journal of East Asian Studies, 16(2), 141–162.

  • Bartneck, C., Bleeker, T., Bun, J., Fens, P., & Riet, L. (2010). The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn, Journal of Behavioral Robotics, 1(2), 109–115.

  • Bell, D., & Metz, T. (2011). Confucianism and Ubuntu: Reflections on a dialogue between Chinese and African traditions. Journal of Chinese Philosophy, 38(s1), 78–95.

  • Brandstetter, J., Beckner, C., Sandoval, E. B., & Bartneck, C. (2017, March). Persistent lexical entrainment in HRI. Paper presented at the 2017 ACM/IEEE international conference on human–robot interaction, Vienna, Austria. Retrieved from https://dl.acm.org/citation.cfm?id=3020257

  • Brindley, E. (2009). "Why use an ox-cleaver to carve a chicken?" The sociology of the ideal in the Lunyu. Philosophy East & West, 59(1), 47–70.

  • Briggs, G., & Scheutz, M. (2014). How robots can affect human behavior: Investigating the effects of robotic displays of protest and distress. International Journal of Social Robotics, 6(3), 343–355.

  • Brown, P., & Levinson, S. (1987). Politeness: Some universals in language usage. Cambridge, UK: University Press.

  • Burgoon, M., Hunsaker, F. G., & Dawson, E. J. (1994). Human communication (3rd ed.). Thousand Oaks, CA: Sage Publications.

  • Confucius. (2000). Confucius Analects: With selections from traditional commentaries (E. Slingerland, Trans.). Indianapolis, IN: Hackett.

  • Cormier, D., Newman, G., Nakane, M., Young, J. E., & Durocher, S. (2013, August). Would you do as a robot commands? An obedience study for human–robot interaction. Paper presented at the First international conference on human–agent interaction, Sapporo, Japan. Retrieved from https://hci.cs.umanitoba.ca/assets/publication_files/2013-would-you-do-as-a-robot-commands.pdf

  • Devin, S., & Alami, R. (2016, March). An implemented theory of mind to improve human–robot shared plans execution. Paper presented at the 11th ACM/IEEE international conference on human robot interaction, Christchurch, New Zealand. Retrieved from https://ieeexplore.ieee.org/abstract/document/7451768/

  • Dumouchel, P., & Damiano, L. (2017). Living with robots (M. DeBevoise, Trans.). Cambridge, MA: Harvard University Press.

  • Feshbach, N. D. (1987). Parental empathy and child adjustment/maladjustment. In N. Eisenberg & J. Strayer (Eds.), Cambridge studies in social and emotional development. Empathy and its development (pp. 271–291). New York, NY: Cambridge University Press.

  • Goette, L., Huffman, D., & Meier, S. (2006). The impact of group membership on cooperation and norm enforcement: Evidence using random assignment to real social groups. American Economic Review, 96(2), 212–216.

  • Häring, M., Kuchenbrandt, D., & André, E. (2014, March). Would you like to play with me? How robots' group membership and task features influence human–robot interaction. Paper presented at the 2014 ACM/IEEE international conference on human–robot interaction, Bielefeld, Germany. Retrieved from https://dl.acm.org/citation.cfm?id=2559673

  • Hiatt, L. M., Harrison, A. M., & Trafton, G. J. (2011, July). Accommodating human variability in human–robot teams through theory of mind. Paper presented at the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain. Retrieved from https://dl.acm.org/citation.cfm?id=2283745

  • Huang, Y. (2007). Is Wang Yangming’s notion of innate moral knowledge (liangzhi) tenable? In V. Shen & K.-L. Shun (Eds.), Confucian ethics in retrospect and prospect (pp. 149–170). Washington, DC: The Council for Research in Values and Philosophy.

  • Iio, T., Shiomi, M., Shinozawa, K., Miyashita, T., Akimoto, T., & Hagita, N. (2009, October). Lexical entrainment in human–robot interaction: can robots entrain human vocabulary? Paper presented at the 2009 IEEE/RSJ international conference on intelligent robots and systems, St. Louis, MO. Retrieved from https://ieeexplore.ieee.org/abstract/document/5354149

  • Isaac, A. M., & Bridewell, W. (2017). White lies on silver tongues: Why robots need to receive (and how). In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence (pp. 155–172). New York, NY: Oxford University Press.

  • Jackson, R. B., & Williams, T. (2018, August). Robot: Asker of questions and changer of norms? Paper presented at the international conference on robot ethics and standards, Troy, NY. Retrieved from https://inside.mines.edu/~twilliams/pdfs/jackson2018icres.pdf

  • Jackson, R. B., & Williams, T. (2019a, March). Language-capable robots may inadvertently weaken human moral norms. Paper presented at the 14th ACM/IEEE international conference on human–robot interaction, Daegu, South Korea. Retrieved from https://inside.mines.edu/~twilliams/pdfs/jackson2019althri.pdf

  • Jackson, R. B., & Williams, T. (2019b, March). On perceived social and moral agency in natural language capable robots. Paper presented at the HRI workshop on the dark side of human–robot interaction: Ethical considerations and community guidelines for the field of HRI, Daegu, South Korea.

  • Jackson, R. B., Wen, R., & Williams, T. (2019, January). Tact in noncompliance: The need for pragmatically apt responses to unethical commands. Paper presented at the AAAI/ACM conference on artificial intelligence, ethics, and society, Honolulu, HI. Retrieved from https://inside.mines.edu/~twilliams/pdfs/jackson2019aies.pdf

  • Jehn, K. A. (1997). A qualitative analysis of conflict types and dimensions in organizational groups. Administrative Science Quarterly, 42(3), 530–557.

  • Jung, M. F., Martelaro, N., & Hinds, P. J. (2015, March). Using robots to moderate team conflict: The case of repairing violations. Paper presented at the 10th Annual ACM/IEEE international conference on human–robot interaction, Portland, OR. Retrieved from https://dl.acm.org/citation.cfm?id=2696460

  • Kadar, D., & Marquez-Reiter, R. (2015). (Im)politeness and (im)morality: Insights from intervention. Journal of Politeness Research Language Behaviour Culture, 11(2), 239–260.

  • Kahn, P., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., et al. (2012, March). Do people hold a humanoid robot morally accountable for the harm it causes? Paper presented at the 7th ACM/IEEE international conference on human–robot interaction, Boston, MA. Retrieved from https://ieeexplore.ieee.org/abstract/document/6249577/

  • Kennedy, J., Baxter, P., & Belpaeme, T. (2014, March). Children comply with a robot's indirect requests. Paper presented at the 2014 ACM/IEEE international conference on human–robot interaction, Bielefeld, Germany. Retrieved from https://dl.acm.org/citation.cfm?id=2559636.2559820

  • Korsgaard, C. M. (1993). The reasons we can share: An attack on the distinction between agent-relative and agent neutral values. Social Philosophy and Policy, 10(1), 24–51.

  • Kronrod, A., Grinstein, A., & Wathieu, L. (2012). Go green! Should environmental messages be so assertive? Journal of Marketing, 76(1), 95–102.

  • Lakoff, R. T., & Ide, S. (Eds.). (2005). Broadening the horizon of linguistic politeness. Amsterdam, Netherlands: John Benjamins Publishing.

  • Lambert, A. (2017). Impartiality, close friendship and the Confucian tradition. In C. Risseeuw & M. van Raalte (Eds.), Conceptualizing friendship in time and place (pp. 205–228). Amsterdam, Netherlands: Brill.

  • Lee, N., Kim, J., Kim, E., & Kwon, O. (2017). The influence of politeness behavior on user compliance with social robots in a healthcare service setting. International Journal of Social Robotics, 9(5), 727–743.

  • Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., & Paiva, A. (2013). The influence of empathy in human–robot relations. International Journal of Human-Computer Studies, 71(3), 250–260.

  • Liu, J. (2017, October). Confucian robotic ethics. Paper presented at the international conference on the relevance of the classics under the conditions of modernity: humanity and science. Hong Kong: The Hong Kong Polytechnic University.

  • Lopez, A., Ccasane, B., Paredes, R., & Cuellar, F. (2017, March). Effects of using indirect language by a robot to change human attitudes. Paper presented at the 2017 ACM/IEEE international conference on human–robot interaction, Vienna, Austria. Retrieved from https://dl.acm.org/citation.cfm?id=3029798.3038310

  • Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. Paper presented at the ACM/IEEE international conference on human–robot interaction, Portland, OR. Retrieved from https://dl.acm.org/citation.cfm?id=2696458

  • Midden, C., & Ham, J. (2012). The illusion of agency: The influence of the agency of an artificial agent on its persuasive power. In M. Bang & E. L. Ragnemalm (Eds.), Persuasive technology design for health and safety: Proceedings of the 7th international conference on persuasive technology (pp. 90–99). Heidelberg, Germany: Springer.

  • Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H., & Hagita, N. (2009, March). Footing in human–robot conversations: How robots might shape participant roles using gaze cues. Paper presented at the 4th ACM/IEEE international conference on human–robot interaction (HRI), La Jolla, CA. Retrieved from https://ieeexplore.ieee.org/document/6256095/

  • Nagai, Y., Hosoda, K., Morita, A., & Asada, M. (2003). A constructive model for the development of joint attention. Connection Science, 15(4), 211–229.

  • Nass, C., Steuer, J., & Tauber, E. R. (1994, April). Computers are social actors. Paper presented at the SIGCHI conference on human factors in computing systems, Boston, MA. Retrieved from https://dl.acm.org/citation.cfm?doid=191666.191703

  • Nuyen, A. T. (2007). Confucian ethics as role-based ethics. International Philosophical Quarterly, 47(3), 315–328.

  • Pereira, A., Leite, A., Mascarenhas, S., Martinho, C., & Paiva, A. (2010, May). Using empathy to improve human–robot relationships. Paper presented at the 9th international conference on autonomous agents and multiagent systems, Toronto, Canada. Retrieved from https://dl.acm.org/citation.cfm?id=1838194

  • Puett, M., & Gross-Loh, C. (2016). The path: What Chinese philosophers can teach us about the good life. New York, NY: Simon & Schuster Inc.

  • Randall, T. E. (2019). Justifying partiality in care ethics. Res Publica. https://doi.org/10.1007/s11158-019-09416-5.

  • Rea, D. J., Geiskkovitch, D., & Young, J. E. (2017, March). Wizard of awwws: Exploring psychological impact on the researchers in social HRI experiments. Paper presented at the 2017 ACM/IEEE international conference on human–robot interaction, Vienna, Austria. Retrieved from https://dl.acm.org/citation.cfm?id=3034782

  • Rosemont, H., & Ames, R. T. (2016). Confucian role ethics: A moral vision for the 21st century?. Taipei, Taiwan: National Taiwan University Press.

  • Scassellati, B. (2002). Theory of mind for a humanoid robot. Autonomous Robots, 12(1), 13–24.

  • Scheutz, M. (2012). The inherent dangers of unidirectional emotional bonds between humans and social robots. In P. Lin, K. Abney & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 205–222). Cambridge, MA: The MIT Press.

  • Seddon, K. H. (n.d.). Epictetus. Internet encyclopedia of philosophy. https://www.iep.utm.edu/epictetu/. Accessed 12 April 2019.

  • Seok, B. (2013). Embodied moral psychology and Confucian philosophy. Lanham, MD: Lexington Books.

  • Serholt, S., Barendregt, W., Vasalou, A., Alves-Oliveira, P., Jones, A., Petisca, S., et al. (2017). The case of classroom robots: teachers' deliberations on the ethical tensions. AI & Society, 32(4), 613–631.

  • Shapiro, S. L., Carlson, L. E., Astin, J. A., & Freedman, B. (2006). Mechanisms of mindfulness. Journal of Clinical Psychology, 62(3), 373–386.

  • Shen, S., Slovak, P., & Jung, M. F. (2018, March). “Stop. I see a conflict happening.”: A robot mediator for young children's interpersonal conflict resolution. Paper presented at the 2018 ACM/IEEE international conference on human–robot interaction, Chicago, IL. Retrieved from https://dl.acm.org/citation.cfm?id=3171248

  • Siegel, M., Breazeal, C., & Norton, M. I. (2009, October). Persuasive robotics: The influence of robot gender on human behavior. Paper presented at 2009 IEEE/RSJ international conference on intelligent robots and systems, St. Louis, MO. Retrieved from https://ieeexplore.ieee.org/document/5354116

  • Simmons, R., Makatchev, M., Kirby, R., Lee, M. K., et al. (2011). Believable robot characters. AI Magazine, 32(4), 39–52.

  • Straub, I. (2016). “It looks like a human!” the interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription. AI & Society, 31(4), 553–571.

  • Tapus, A., & Mataric, M. (2007, March). Emulating empathy in socially assistive robotics. Paper presented at the AAAI spring symposium on multidisciplinary collaboration for socially assistive robotics, Palo Alto, CA. Retrieved from https://robotics.usc.edu/publications/media/uploads/pubs/533.pdf

  • Tsuzuki, M., Miyamoto, S., & Zhang, Q. (1999). Politeness degree of imperative and question request expressions: Japanese, English, Chinese. Paper presented at the 6th international colloquium on cognitive science, Tokyo, Japan.

  • Vollmer, A.-L., Rohlfing, K. J., Wrede, B., & Cangelosi, A. (2015). Alignment to the actions of a robot. International Journal of Social Robotics, 7(2), 241–252.

  • Vollmer, A.-L., Wrede, B., Rohlfing, K. J., & Cangelosi, A. (2013, August). Do beliefs about a robot's capabilities influence alignment to its actions? Paper presented at the IEEE 3rd joint international conference on development and learning and epigenetic robotics (ICDL), Osaka, Japan. Retrieved from https://ieeexplore.ieee.org/document/6652521

  • Williams, T., Acharya, S., Schreitter, S., & Scheutz, M. (2016, March). Situated open world reference resolution for human–robot dialogue. Paper presented at the 11th ACM/IEEE international conference on human–robot interaction, Christchurch, New Zealand. Retrieved from https://ieeexplore.ieee.org/document/7451767

  • Williams, T., Jackson, R. B., & Lockshin, J. (2018, July). A Bayesian analysis of moral norm malleability during clarification dialogues. Paper presented at the 40th Annual Meeting of the Cognitive Science Society, Madison, WI. Retrieved from https://inside.mines.edu/~twilliams/pdfs/williams2018cogsci.pdf

  • Williams, T., & Scheutz, M. (2019). Reference in robotics: A givenness hierarchy theoretic approach. In J. Gundel & B. Abbott (Eds.), Oxford handbook of reference. Oxford, UK: Oxford University Press.

  • Wong, D. B. (2014). Cultivating the self in concert with others. In A. Olberding (Ed.), Dao companion to the Analects (pp. 171–198). Dordrecht, Netherlands: Springer.

  • Wong, P.-H. (2012). Dao, harmony and personhood: Towards a Confucian ethics of technology. Philosophy and Technology, 25(1), 67–86.


Acknowledgements

This work was funded in part by National Science Foundation grant IIS-1909847.

Author information

Correspondence to Qin Zhu.


Cite this article

Zhu, Q., Williams, T., Jackson, B. et al. Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective. Sci Eng Ethics 26, 2511–2526 (2020). https://doi.org/10.1007/s11948-020-00246-w
