
The Essence of Ethical Reasoning in Robot-Emotion Processing

Published in: International Journal of Social Robotics

Abstract

As social robots become increasingly intelligent and autonomous, it is essential to ensure that they act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also be able to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability in the emotions expressed. We argue that the believability of a robot’s emotions, although necessary, is not sufficient to ensure socially acceptable emotions. We therefore stress the need for a higher level of cognition in the emotion processing mechanism, one that empowers a social robot to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit it. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot reach the most socially acceptable emotional state when more than one emotion is elicited by an event. Experimental results show that ethical reasoning in EEGS helps generate emotions that are both believable and socially acceptable.
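To make the selection idea concrete, the following is a minimal, hypothetical Python sketch of the kind of decision the abstract describes: an event elicits several candidate emotions, an ethical-reasoning layer scores each for social acceptability, and the robot expresses the best trade-off. The class, field names, and multiplicative scoring rule are illustrative assumptions, not the EEGS implementation.

    from dataclasses import dataclass

    @dataclass
    class CandidateEmotion:
        name: str
        intensity: float       # appraisal-derived intensity in [0, 1]
        acceptability: float   # ethical-reasoning score in [0, 1]

    def select_expressible_emotion(candidates):
        # Hypothetical convergence rule: balance felt intensity against
        # social acceptability and express the best-scoring candidate.
        return max(candidates, key=lambda e: e.intensity * e.acceptability)

    elicited = [
        CandidateEmotion("anger", intensity=0.8, acceptability=0.2),
        CandidateEmotion("distress", intensity=0.6, acceptability=0.7),
    ]
    print(select_expressible_emotion(elicited).name)  # distress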



Notes

  1. Our focus in the remainder of the paper will be primarily on autonomous robots that implement emotion models.

  2. While the definition of what is socially acceptable may vary between cultures, our definition of socially acceptable emotions focuses on the context of interaction between a human and a robot, as presented in the earlier examples.

  3. By ethical standards, we mean what a person believes to be right or wrong from an ethical standpoint.

  4. According to Appraisal theory, emotions result from the evaluation of a given situation, which requires deliberate thinking by the individual [26, 32].

  5. According to Appraisal theory, an event can trigger more than one emotion at the same time [26].

  6. While another form of ethical theory, virtue ethics, exists [14], it is mostly descriptive in nature and not feasible to realise in artificial agents such as social robots. We therefore do not discuss virtue ethics in this paper.

  7. Although the literature suggests that mood and personality play a dynamic role in emotion generation, we do not discuss their relationship with emotion in this paper. We have integrated the notions of mood and personality into EEGS and are currently investigating their role in the emotion generation process.

  8. A detailed discussion of how emotions are differentiated by varying degrees of positivity and negativity is outside the scope of this paper. For further discussion of the degrees of valence of different emotions, see [28] and related literature.

  9. While the signed value of Degree is sufficient to specify the Valence as POSITIVE or NEGATIVE, we chose to treat Valence as an explicit parameter for computational convenience.

  10. The range \([-1, +1]\) is a subjective choice; it would be equally feasible to select another range such as \([-10, +10]\) or \([-100, +100]\). A minimal sketch of this Degree/Valence representation appears after these notes.

  11. We could not find strong evidence on how long the decay time of an emotion should be. However, most existing emotion models were found to use a decay time of less than 10 s (see the decay sketch following these notes).

  12. In the examples of the previous paragraph, the Source was the robot itself. We use the notion of Source to allow EEGS to also store standards about how it believes one person should behave towards another. This design enables EEGS to perform ethical reasoning when two other persons it recognises interact with each other, a property that can be extremely useful in multi-agent interaction (see the standards-store sketch following these notes).

  13. While the scenarios were designed by the subjects, the emotion generation mechanism was dynamic and determined by the emotion system itself during the interaction.

  14. Dementia is a mental condition in which a person experiences a gradual decline in the ability to think and remember, affecting even normal daily activities.

  15. See Table 2 for examples of actions from Rose to Lily.

  16. As mentioned in Sect. 6.1, emotion intensities in EEGS lie in the range [0, 1], where 0 signifies very low intensity and 1 signifies very high intensity.

  17. While we have used the Quantified Emotion as the measure of emotion dynamics in this paper, using only the emotion intensity, signed for positive or negative emotions, provided similar results (see the signed-intensity method in the representation sketch following these notes).

  18. It is reasonable to argue that lowered negativity in emotional responses is not always ethical, since it can result from an individual’s bias in favour of his/her loved ones. However, in social interactions such as that between Rose and Lily, lowered negativity in emotional responses is desirable.
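The following minimal Python sketch illustrates the representation described in Notes 9, 10, 16 and 17: a signed Degree in \([-1, +1]\) from which the POSITIVE/NEGATIVE Valence is derived but kept as an explicit parameter, plus the signed-intensity measure mentioned as an alternative to the Quantified Emotion. The class and method names are our assumptions for illustration, not EEGS code.

    from dataclasses import dataclass, field

    @dataclass
    class EmotionState:
        name: str
        degree: float     # signed degree of positivity/negativity in [-1, +1] (Note 10)
        intensity: float  # emotion intensity in [0, 1] (Note 16)
        valence: str = field(init=False)

        def __post_init__(self):
            # Redundant with the sign of `degree`, but kept explicit for
            # computational convenience (Note 9).
            self.valence = "POSITIVE" if self.degree >= 0 else "NEGATIVE"

        def signed_intensity(self) -> float:
            # Alternative dynamics measure from Note 17: the intensity
            # carrying the sign of the valence, giving a value in [-1, +1].
            return self.intensity if self.valence == "POSITIVE" else -self.intensity

    anger = EmotionState("anger", degree=-0.6, intensity=0.8)
    print(anger.valence, anger.signed_intensity())  # NEGATIVE -0.8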
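Note 11 leaves the decay law unspecified; the sketch below assumes simple exponential decay calibrated so that an emotion fades to about 5% of its initial intensity at the 10 s horizon most models use. The decay constant and function are illustrative assumptions, not the paper's formula.

    import math

    def decayed_intensity(initial, elapsed_s, decay_time_s=10.0):
        # Assumed exponential decay: intensity falls to e^-3 (about 5%) of
        # its initial value after `decay_time_s` seconds.
        return initial * math.exp(-3.0 * elapsed_s / decay_time_s)

    print(round(decayed_intensity(0.9, 5.0), 3))   # 0.201
    print(round(decayed_intensity(0.9, 10.0), 3))  # 0.045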
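The source-aware standards of Note 12 can be pictured as a lookup keyed by (source, target, action), so the robot can judge interactions it merely observes between two other agents. The dictionary layout, example entries, and neutral default below are illustrative assumptions.

    # (source, target, action) -> believed appropriateness in [-1, +1]
    standards = {
        ("caregiver", "patient", "shout at"): -0.9,
        ("caregiver", "patient", "comfort"): +0.8,
    }

    def judge(source, target, action):
        # Neutral (0.0) when the robot holds no belief about this triple.
        return standards.get((source, target, action), 0.0)

    # Ethical reasoning over a third-party interaction the robot observes:
    print(judge("caregiver", "patient", "shout at"))  # -0.9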

References

  1. Alexander L, Moore M (2007) Deontological ethics. Stanford encyclopedia of philosophy. https://stanford.library.sydney.edu.au/entries/ethics-deontological/

  2. Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261

  3. Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17

  4. Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15

  5. Bartneck C (2003) Interacting with an embodied emotional character. In: Proceedings of the 2003 international conference on designing pleasurable products and interfaces. ACM, pp 55–60

  6. Becker-Asano C (2008) WASABI: affect simulation for agents with believable interactivity, vol 319. IOS Press, Amsterdam

  7. Breazeal C (2003) Emotion and sociable humanoid robots. Int J Hum–Comput Stud 59(1):119–155

  8. Callahan S (1988) The role of emotion in ethical decision-making. Hastings Cent Rep 18(3):9–14

  9. Dias J, Mascarenhas S, Paiva A (2014) FAtiMA modular: towards an agent architecture with a generic appraisal framework. In: Emotion modeling. Springer, Berlin, pp 44–56

  10. El-Nasr MS, Yen J, Ioerger TR (2000) FLAME—fuzzy logic adaptive model of emotions. Auton Agents Multi-Agent Syst 3(3):219–257

  11. Gaudine A, Thorne L (2001) Emotion and ethical decision-making in organizations. J Bus Ethics 31(2):175–187

  12. Gebhard P (2005) ALMA: a layered model of affect. In: Proceedings of the fourth international joint conference on autonomous agents and multiagent systems. ACM, pp 29–36

  13. Gratch J, Marsella S (2004) A domain-independent framework for modeling emotion. Cogn Syst Res 5(4):269–306

  14. Hooker J (1996) Three kinds of ethics. Carnegie Mellon University. http://public.tepper.cmu.edu/ethics/three.pdf

  15. Isen AM, Means B (1983) The influence of positive affect on decision-making strategy. Soc Cogn 2(1):18–31

  16. Kopp S, Jung B, Lessmann N, Wachsmuth I (2003) Max—a multimodal assistant in virtual reality construction. KI 17(4):11

  17. Lambie JA, Marcel AJ (2002) Consciousness and the varieties of emotion experience: a theoretical framework. Psychol Rev 109(2):219

  18. Le Blanc AD (1999) Graphical user interface to communicate attitude or emotion to a computer program. US Patent 5,977,968

  19. Marinier RP, Laird JE (2007) Computational modeling of mood and feeling from emotion. In: Proceedings of the Cognitive Science Society, vol 29

  20. Marreiros G, Santos R, Ramos C, Neves J (2010) Context-aware emotion-based model for group decision making. IEEE Intell Syst 25(2):31–39

  21. Marsella S, Gratch J, Petta P et al (2010) Computational models of emotion. In: A blueprint for affective computing—a sourcebook and manual, vol 11(1), pp 21–46

  22. Marsella SC, Gratch J (2009) EMA: a process model of appraisal dynamics. Cogn Syst Res 10(1):70–90

  23. Ojha S, Williams MA (2016) Ethically-guided emotional responses for social robots: should I be angry? In: International conference on social robotics. Springer, Berlin, pp 233–242

  24. Ojha S, Williams MA (2017) A domain-independent approach of cognitive appraisal augmented by higher cognitive layer of ethical reasoning. In: Annual meeting of the Cognitive Science Society

  25. Ojha S, Williams MA (2017) Emotional appraisal: a computational perspective. In: Annual conference on advances in cognitive systems

  26. Ortony A, Clore GL, Collins A (1990) The cognitive structure of emotions. Cambridge University Press, Cambridge

  27. Padgham L, Taylor G (1997) A system for modelling agents having emotion and personality. In: International workshop on intelligent agent systems. Springer, Berlin

  28. Plutchik R (1997) The circumplex as a general model of the structure of emotions and personality. In: Plutchik R, Conte HR (eds) Circumplex models of personality and emotions. American Psychological Association, Washington, DC

  29. Quinton A (1973) Utilitarian ethics. Springer, Berlin

  30. Reilly WN (2006) Modeling what happens between emotional antecedents and emotional consequents. In: Symposium on agent construction and emotions

  31. Reilly WS (1996) Believable social and emotional agents. Tech. rep., Carnegie Mellon University, Pittsburgh, PA

  32. Scherer KR (2001) Appraisal considered as a process of multilevel sequential checking. Apprais Process Emot Theory Methods Res 92(120):57

  33. White J (2015) Rethinking machine ethics in the age of ubiquitous technology. IGI Global, Pennsylvania


Acknowledgements

This research is supported by an Australian Government Research Training Program Scholarship.

Author information


Corresponding author

Correspondence to Suman Ojha.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest with any organisation in relation to this research.

Funding

This research was funded by the Research Scholarship provided by the University of Technology Sydney. There is no external funding associated with this research.


About this article


Cite this article

Ojha, S., Williams, MA. & Johnston, B. The Essence of Ethical Reasoning in Robot-Emotion Processing. Int J of Soc Robotics 10, 211–223 (2018). https://doi.org/10.1007/s12369-017-0459-y
