Abstract
As social robots become increasingly intelligent and autonomous, it is extremely important to ensure that they act in a socially acceptable manner. More specifically, if an autonomous robot is capable of generating and expressing emotions of its own, it should also be able to reason about whether it is ethical to exhibit a particular emotional state in response to a surrounding event. Most existing computational models of emotion for social robots have focused on achieving a certain level of believability of the expressed emotions. We argue that believability of a robot’s emotions, although crucially necessary, is not sufficient to elicit socially acceptable emotions. Thus, we stress the need for a higher level of cognition in the emotion processing mechanism, one that empowers social robots to decide whether it is socially appropriate to express a particular emotion in a given context or better to inhibit it. In this paper, we present a detailed mathematical explanation of the ethical reasoning mechanism in our computational model, EEGS, which helps a social robot reach the most socially acceptable emotional state when more than one emotion is elicited by an event. Experimental results show that ethical reasoning in EEGS helps generate believable as well as socially acceptable emotions.
Notes
Our focus in the remainder of the paper will be inclined towards autonomous robots implementing emotion models.
While the definition of what is socially acceptable might vary between cultures, our definition of socially acceptable emotions focuses on the context of interaction between a human and a robot, as presented in the earlier examples.
By ethical standards, we mean what a person believes to be right or wrong from an ethical standpoint.
According to appraisal theory, an event can trigger more than one emotion at the same time [26].
While another form of ethical theory called virtue ethics exists [14], it is mostly descriptive in nature and not feasible to realise in artificial agents such as social robots. Therefore, we shall not discuss virtue ethics in this paper.
Although the literature suggests that mood and personality play a dynamic role in the process of emotion generation, in this paper we shall not discuss their relationship with emotion. We have integrated the notions of mood and personality in EEGS and are currently investigating how these factors affect the process of emotion generation.
A detailed discussion of how emotions are differentiated by varying degrees of positivity and negativity is beyond the scope of this paper. For further discussion on the degrees of valence of different emotions, please refer to [28] and related literature.
While the signed value of Degree was sufficient to specify the Valence as POSITIVE or NEGATIVE, we chose to treat “Valence” as an explicit parameter for ease of computation.
The range of \([-1, +1]\) is a subjective choice. It is completely feasible to select other ranges such as \([-10, +10]\) or \([-100, +100]\).
We could not find strong evidence on how long the decay time of an emotion should be. However, most existing emotion models were found to use a decay time of less than 10 s.
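The paper does not commit to a particular decay function; as a purely illustrative sketch (the linear form, the rate, and the names `decayed_intensity` and `DECAY_TIME` are our assumptions, not part of EEGS), an intensity that fades to zero within 10 s could be modelled as:

```python
# Illustrative only: EEGS does not prescribe this decay function.
# Linear decay of an emotion intensity (range [0, 1]) to zero
# within DECAY_TIME seconds after the eliciting event.

DECAY_TIME = 10.0  # seconds; most surveyed models use < 10 s


def decayed_intensity(initial_intensity: float, elapsed: float) -> float:
    """Intensity remaining `elapsed` seconds after elicitation."""
    remaining = initial_intensity * (1.0 - elapsed / DECAY_TIME)
    return max(0.0, remaining)  # never decay below zero
```

An exponential form would serve equally well; the only property the surveyed models agree on is that the intensity becomes negligible within roughly 10 s.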
In the examples in the previous paragraph, the Source was the robot itself. We have used the notion of Source to allow EEGS to also store standards about how it believes one person should behave towards another. This design enables EEGS to perform ethical reasoning when two other persons it recognises interact with each other, a property that can be extremely useful in multi-agent interaction.
While the scenarios were designed by the subjects, the emotion generation mechanism was dynamic and determined by the emotion system itself during the interaction.
Dementia is a mental condition in which a person experiences a gradual decline in the ability to think and to remember even ordinary details of daily life.
See Table 2 for examples of actions from Rose to Lily.
As mentioned in Sect. 6.1, emotion intensities in EEGS lie in the range [0, 1], where 0 signifies very low intensity and 1 signifies very high intensity.
While we have used the Quantified Emotion as the measure of emotion dynamics in this paper, using only the emotion intensity with a sign for positive or negative emotions also produced similar results.
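A minimal sketch of the signed-intensity alternative mentioned in this note, using the Valence parameter and the [0, 1] intensity range described above (the function name is ours, not part of EEGS):

```python
# Illustrative sketch: fold the emotion's valence into its intensity
# to obtain a single signed measure, as in the alternative mentioned
# in this note. Positive emotions map to [0, 1], negative to [-1, 0].

def signed_intensity(intensity: float, valence: str) -> float:
    """Combine an intensity in [0, 1] with a POSITIVE/NEGATIVE valence."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must lie in [0, 1]")
    return intensity if valence == "POSITIVE" else -intensity

# e.g. joy at 0.7 -> +0.7, anger at 0.7 -> -0.7
```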
It is reasonable to argue that lowered negativity in emotional responses is not always ethical, since it can arise from an individual’s bias in favour of his/her loved ones. However, in situations of social interaction such as that of Rose and Lily, lowered negativity in emotional responses is desirable.
References
Alexander L, Moore M (2007) Deontological ethics. Stanford encyclopedia of philosophy. https://stanford.library.sydney.edu.au/entries/ethics-deontological/
Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12(3):251–261
Allen C, Wallach W, Smit I (2006) Why machine ethics? IEEE Intell Syst 21(4):12–17
Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 28(4):15
Bartneck C (2003) Interacting with an embodied emotional character. In: Proceedings of the 2003 international conference on designing pleasurable products and interfaces. ACM, pp 55–60
Becker-Asano C (2008) WASABI: affect simulation for agents with believable interactivity, vol 319. IOS Press, Amsterdam
Breazeal C (2003) Emotion and sociable humanoid robots. Int J Hum–Comput Stud 59(1):119–155
Callahan S (1988) The role of emotion in ethical decision-making. Hastings Cent Rep 18(3):9–14
Dias J, Mascarenhas S, Paiva A (2014) FAtiMA modular: towards an agent architecture with a generic appraisal framework. In: Emotion modeling. Springer, Berlin, pp 44–56
El-Nasr MS, Yen J, Ioerger TR (2000) FLAME—fuzzy logic adaptive model of emotions. Auton Agents Multi-agent Syst 3(3):219–257
Gaudine A, Thorne L (2001) Emotion and ethical decision-making in organizations. J Bus Ethics 31(2):175–187
Gebhard P (2005) Alma: a layered model of affect. In: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems. ACM, pp 29–36
Gratch J, Marsella S (2004) A domain-independent framework for modeling emotion. Cogn Syst Res 5(4):269–306
Hooker J (1996) Three kinds of ethics. Carnegie Mellon University. http://public.tepper.cmu.edu/ethics/three.pdf
Isen AM, Means B (1983) The influence of positive affect on decision-making strategy. Soc Cogn 2(1):18–31
Kopp S, Jung B, Lessmann N, Wachsmuth I (2003) Max—a multimodal assistant in virtual reality construction. KI 17(4):11
Lambie JA, Marcel AJ (2002) Consciousness and the varieties of emotion experience: a theoretical framework. Psychol Rev 109(2):219
Le Blanc AD (1999) Graphical user interface to communicate attitude or emotion to a computer program. US Patent 5,977,968
Marinier RP, Laird JE (2007) Computational modeling of mood and feeling from emotion. In: Proceedings of the Cognitive Science Society, vol 29
Marreiros G, Santos R, Ramos C, Neves J (2010) Context-aware emotion-based model for group decision making. IEEE Intell Syst 25(2):31–39
Marsella S, Gratch J, Petta P et al (2010) Computational models of emotion. In: A blueprint for affective computing—a sourcebook and manual, pp 21–46
Marsella SC, Gratch J (2009) EMA: a process model of appraisal dynamics. Cogn Syst Res 10(1):70–90
Ojha S, Williams MA (2016) Ethically-guided emotional responses for social robots: Should I be angry? In: International conference on social robotics. Springer, Berlin, pp 233–242
Ojha S, Williams MA (2017) A domain-independent approach of cognitive appraisal augmented by higher cognitive layer of ethical reasoning. In: Annual meeting of the Cognitive Science Society
Ojha S, Williams MA (2017) Emotional appraisal: a computational perspective. In: Annual conference on advances in cognitive systems
Ortony A, Clore GL, Collins A (1990) The cognitive structure of emotions. Cambridge University Press, Cambridge
Padgham L, Taylor G (1997) A system for modelling agents having emotion and personality. International Workshop on Intelligent Agent Systems. Springer, Berlin
Plutchik R (1997) The circumplex as a general model of the structure of emotions and personality. In: Plutchik R, Conte HR (eds) Circumplex models of personality and emotions. American Psychological Association, Washington, DC
Quinton A (1973) Utilitarian ethics. Springer, Berlin
Reilly WN (2006) Modeling what happens between emotional antecedents and emotional consequents. Symposium on agent construction and emotions
Reilly WS (1996) Believable social and emotional agents. Tech. rep., Carnegie-Mellon University, Pittsburgh, PA
Scherer KR (2001) Appraisal considered as a process of multilevel sequential checking. Apprais Process Emot Theory Methods Res 92(120):57
White J (2015) Rethinking machine ethics in the age of ubiquitous technology. IGI Global, Pennsylvania
Acknowledgements
This research is supported by an Australian Government Research Training Program Scholarship.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest with any organisation in relation to this research.
Funding
This research was funded by the Research Scholarship provided by the University of Technology Sydney. There is no external funding associated with this research.
Cite this article
Ojha, S., Williams, MA. & Johnston, B. The Essence of Ethical Reasoning in Robot-Emotion Processing. Int J of Soc Robotics 10, 211–223 (2018). https://doi.org/10.1007/s12369-017-0459-y