Promoting Constructive Interaction and Moral Behaviors Using Adaptive Empathetic Learning

  • Jize Chen
  • Yanning Zuo
  • Dali Zhang
  • Zhenshen Qu
  • Changhong Wang (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11740)


The moral system assists people in constructive interaction by maximizing the inner stimulus transferred from outer feelings. For this reason, building an intrinsic sense of morality is one potential way of regulating agents’ behaviors. Incorporating ideas from social neuroscience, we hardwired a theoretical model of empathy into rational reinforcement-learning agents to enable affective state sharing between agents. Our learning algorithm accounts for the impact of social comparison and companion impression, which play an important role in the update of empathy and allow agents to shift adaptively between cooperation and competition. Empathetic learners’ behavioral dynamics were tested and analyzed in multiple game settings. In the iterated prisoner’s dilemma, empathetic agents showed increased cooperation in most cases, but vigilantly exhibited self-protective behavior when their partners were in an antagonistic state. Empathetic agents also showed a strong sense of fairness in the ultimatum game, which resulted in an evenhanded allocation of resources.


Keywords: Empathy · Constructive interaction · Multi-agent system · Reinforcement learning
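The empathy mechanism described in the abstract can be illustrated with a minimal sketch: a stateless Q-learner in the iterated prisoner's dilemma whose subjective reward blends its own payoff with its partner's, weighted by an empathy level that tracks how cooperative the partner has recently been. The blending rule, the exponential-moving-average empathy update, and all parameter values below are illustrative assumptions standing in for the paper's "companion impression" model, not the authors' actual algorithm.

```python
import random

# Prisoner's dilemma payoffs: (my payoff, partner payoff),
# indexed by (my action, partner action); 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

class EmpatheticAgent:
    """Q-learner whose reward is empathy-weighted (illustrative sketch)."""

    def __init__(self, lr=0.1, eps=0.1, impression_rate=0.05):
        self.q = [0.0, 0.0]     # Q-values for cooperate / defect
        self.empathy = 0.5      # empathy level in [0, 1]
        self.lr = lr
        self.eps = eps
        self.impression_rate = impression_rate

    def act(self):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.randint(0, 1)
        return 0 if self.q[0] >= self.q[1] else 1

    def learn(self, my_action, partner_action):
        my_pay, partner_pay = PAYOFFS[(my_action, partner_action)]
        # Empathy drifts up when the partner cooperates and down when
        # they defect, so the agent can fall back to self-protection
        # against an antagonistic partner.
        target = 1.0 if partner_action == 0 else 0.0
        self.empathy += self.impression_rate * (target - self.empathy)
        # Subjective reward shares the partner's outcome in proportion
        # to the current empathy level.
        reward = (1 - self.empathy) * my_pay + self.empathy * partner_pay
        self.q[my_action] += self.lr * (reward - self.q[my_action])

random.seed(0)
a, b = EmpatheticAgent(), EmpatheticAgent()
for _ in range(5000):
    act_a, act_b = a.act(), b.act()
    a.learn(act_a, act_b)
    b.learn(act_b, act_a)

print(round(a.empathy, 2), a.q[0] > a.q[1])
```

Under this toy dynamic, two empathetic agents settle into mutual cooperation: each agent's empathy climbs toward 1 and cooperation remains the higher-valued action, mirroring the cooperative regime reported in the abstract.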



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jize Chen (1)
  • Yanning Zuo (2)
  • Dali Zhang (1)
  • Zhenshen Qu (1)
  • Changhong Wang (1), corresponding author

  1. Space Science and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, People’s Republic of China
  2. Department of Biological Chemistry and Department of Neurobiology, University of California, Los Angeles, USA
