
1 Introduction

Emotions affect our intentions, perceptions, behaviors, and decision-making. From a patient’s perspective, emotions can influence decisions that impact their health. Such decisions may involve care situations in which a patient’s loved ones must choose whether to continue or end care, or they may influence whether a patient accepts immunizations. So, when interacting with patients or health consumers, it is important to account for the role emotions play. Providers must also be conscious of emotional contagion [7], whereby a provider who expresses an emotion could sway or inspire the patient’s own emotions.

Inspired by the use of intelligent agents in health care, we surmised that if such tools were to be used with patients, they would need to include emotions as a factor. In this paper, we discuss and demonstrate a proof-of-concept software engine, VEO-Engine, that could add emotional responses to intelligent agents using ontologies and semantic web technologies.

Briefly, an ontology is a semantically driven electronic artifact that formally represents concepts, the links between concepts, and domain knowledge in a machine-readable format. The artifact is published in a machine-oriented syntax that structures domain knowledge in a form that can be shared and processed by machines. One such syntax is the Web Ontology Language (OWL) [13], which is the language we used in this work. OWL provides language features that support high-level machine reasoning over the encoded knowledge. In theory, when a machine can define and structure the knowledge and concepts of a specific domain, it can further understand that domain.
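To make this concrete, the following is a minimal Java sketch of loading an OWL ontology and enumerating its classes with the OWL-API (a library we return to in Sect. 2); the file name veo.owl is a placeholder rather than the published location of our ontology.

```java
import java.io.File;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class LoadOntologyExample {
    public static void main(String[] args) throws OWLOntologyCreationException {
        // Load an OWL ontology from a local file (the path is a placeholder).
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology =
                manager.loadOntologyFromOntologyDocument(new File("veo.owl"));

        // Each OWL class is a formally defined concept the machine can reason over.
        for (OWLClass c : ontology.getClassesInSignature()) {
            System.out.println(c.getIRI().getShortForm());
        }
    }
}
```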

1.1 Summary of Previous Work

We investigated a spectrum of emotions and how to define them so machines can understand them. We translated the Ortony, Clore, and Collins (OCC) model of emotions [11], along with the revised version proposed by Steunebrink et al. [12], into what we called the Visualized Emotion Ontology (VEO) using OWL [10]. In addition, because all but one of Paul Ekman’s classifications of emotions [4] overlapped with the model, we also included the missing emotion, surprise, in the ontology. In brief, the OCC model defines emotions as emergent conditions arising from a composite of behavior and situations. For example, the emotion of fear is defined as a negative feeling that involves a situation pertaining to displeasure of a probable consequence. Further, the OCC model utilizes semantics and logical structures that can easily be rendered into an ontology using OWL. Lastly, we created visualizations for each described emotion based on evidence from published research, and each visualization was linked to an emotion using the ontology. Overall, the VEO semantically defined and visualized 25 emotions [9].
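To illustrate, such a definition could be rendered as a description logic axiom along the following lines (a sketch only; the class and property names mirror the hope and love examples discussed later in this paper rather than the VEO’s verbatim axioms):

$$ Fear \equiv Negative\_Emotion \sqcap \exists concernsConsequence.Prospective\_Consequence $$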

We then assessed the representation of emotions by evaluating the VEO structure using semiotic theory-driven metrics and assessed the visual representations of the emotions using Amazon Mechanical Turk. The initial assessments yielded a structurally and semantically sound ontology compared with other cognition-related ontologies, and the individuals surveyed (\( n=1082 \)) agreed that most visualizations represented specific emotions [9].

Next, we endeavored to use the VEO in machines, to demonstrate the usefulness of the ontology, and of semantic web technologies in general, in machines that could host intelligent agents.

1.2 Research Objectives

The objective of this study was to show that, on small devices, we could use an emotion ontology to reason over and query emotions. This study furthers our work on developing conversational agents that include emotions in their interactions with humans. It may also encourage interest in using ontologies and the semantic web to help machines express and interpret emotions with human users.

To support our objective, we performed the following:

  1. Developed a proof-of-concept engine that harnesses the VEO to allow querying and interpretation of emotions through an application programming interface (API).

  2. Tested the VEO-Engine’s functionality to query and perform reasoning over emotions.

2 Materials and Method

The VEO-Engine was developed in Java and employs the following libraries: Apache Jena [2], the OWL-API [8], and the HermiT reasoner [5]. The VEO-Engine software library carries an application-specific version of the VEO: the core knowledge base without the imported ontologies from our previous studies. This simpler form of the VEO makes it easier to test and experiment with. The VEO-Engine also hosts local versions of the visualized emotion images. It was deployed as a distributable JAR file that can be integrated with existing software applications.

Fig. 1. Sample encoding (in Turtle) of VEO’s emotion concept of relief. The last three lines denote the linked image and web files.

We also added a sample Java GUI that enables a demonstration of the two basic functionalities of the VEO-Engine: querying emotion visualizations and machine-based reasoning. SPARQL [6] was used to query emotion visualizations. Each SPARQL query was executed on the VEO, and each VEO emotion was linked to an image in the JAR file. Figure 1 shows the VEO emotion visualization for relief defined in Turtle syntax [3]. The link to the image file is handled by the property \( veo:has\_local\_image\_file \). In Fig. 1, the relief visualization is assigned to the image file “relief.png”.
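As a sketch of how such a visualization lookup could be issued with Jena, consider the following; only the property name veo:has_local_image_file is taken from Fig. 1, while the namespace IRI, the resource name veo:Relief, and the file veo.ttl are illustrative placeholders.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class EmotionImageQuery {
    public static void main(String[] args) {
        // Load the application-specific VEO file (the path is a placeholder).
        Model model = RDFDataMgr.loadModel("veo.ttl");

        // Look up the image file linked to the emotion relief.
        // The namespace IRI is illustrative, not the published VEO IRI.
        String sparql =
            "PREFIX veo: <http://example.org/veo#>\n" +
            "SELECT ?file WHERE { veo:Relief veo:has_local_image_file ?file }";

        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("file")); // e.g., "relief.png"
            }
        }
    }
}
```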

The second VEO-Engine function involved machine-based reasoning that harnessed the HermiT API. In order to interpret an emotion, the VEO-Engine required an input of the emotional valence (positive or negative emotion type) and contextual data for the situation or psychological state. Formula (1) describes the input format the reasoner uses to determine an emotion.

$$\begin{aligned}{}[positive \mid negative] \ and \ [concept\_property_{1} \ [\ldots \ and \ concept\_property_{n}]] \end{aligned}$$
(1)

For example, love is defined by the VEO as a [positive] emotion that involves liking something familiar [concept property := “concernsAspect some Familiar_Aspect”]. For the software to determine whether love is being expressed, it would need positive as its emotional valence parameter and concernsAspect some Familiar_Aspect as its concept property parameter.
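A sketch of how this determination could be posed through the OWL-API and HermiT follows, with the input expression built as in Formula (1); the IRIs are placeholders, and the class and property names mirror the love example above.

```java
import java.io.File;

import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class EmotionReasoningExample {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology veo = manager.loadOntologyFromOntologyDocument(new File("veo.owl"));
        OWLDataFactory df = manager.getOWLDataFactory();

        // Placeholder namespace; the real VEO IRIs may differ.
        String ns = "http://example.org/veo#";
        OWLClass positive = df.getOWLClass(IRI.create(ns + "Positive_Emotion"));
        OWLClass familiar = df.getOWLClass(IRI.create(ns + "Familiar_Aspect"));
        OWLObjectProperty concernsAspect =
                df.getOWLObjectProperty(IRI.create(ns + "concernsAspect"));

        // positive and (concernsAspect some Familiar_Aspect), per Formula (1).
        OWLClassExpression input = df.getOWLObjectIntersectionOf(
                positive, df.getOWLObjectSomeValuesFrom(concernsAspect, familiar));

        // Ask HermiT which named emotion matches the input expression;
        // if the VEO defines love this way, Love is expected in the answer.
        OWLReasoner reasoner = new ReasonerFactory().createReasoner(veo);
        reasoner.getEquivalentClasses(input)
                .forEach(c -> System.out.println(c.getIRI().getShortForm()));
    }
}
```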

Fig. 2. Interpreting a user’s emotional information.

Figure 2 shows the broad process wherein an intelligent agent consumes the emotional valence data and the contextual situational data from a human user. Using the entered parameters, the HermiT reasoner enables the VEO-Engine to determine the precise emotion based on what has been defined in the VEO.

To test the software library, we used a Raspberry Pi 3 Model B board running Raspbian version 9. The device was also connected to a 7” touchscreen display with an 800\(\,\times \,\)480 pixel screen resolution. The VEO-Engine was deployed to the device, and we executed sample tests through the command line to assess both the visualization querying and the emotion reasoning of the library.

3 Results and Discussion

Aside from the input parameters we provided through the command line, the entire library was executed locally on the Raspberry Pi device and performed its functions without any connection to external software services.

Through a command-line input for a specific emotion, the VEO-Engine queried for the corresponding image file and displayed a sample window showing the visualization linked to that emotion. Figure 3 shows anger displayed by the VEO-Engine on a Raspberry Pi device.

Fig. 3. Touchscreen device displaying results of a visualization query for the emotion anger.

We tested the VEO-Engine’s reasoner by feeding it a string of data describing an emotion. The input took the following form:

$$ reason \ [positive \mid negative] \ [concept\_property_{n}] $$

For example, the input reason [positive] [concernsConsequence some Prospective_Consequence] resolved to the emotion of hope. Figure 4 displays the result of this sample input, demonstrating the reasoning capability of the VEO-Engine on a small device. A sketch of how such a command string could be parsed appears after Fig. 4.

Fig. 4. VEO-Engine performed a reasoning task based on parameters for the emotion of hope.
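As a sketch, the bracketed arguments of such a command string could be extracted as follows (the regular expression and class are illustrative, not the VEO-Engine’s actual parser):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReasonCommandParser {
    // Captures each bracketed argument of a "reason" command.
    private static final Pattern ARG = Pattern.compile("\\[([^\\]]+)\\]");

    public static void main(String[] args) {
        String input = "reason [positive] [concernsConsequence some Prospective_Consequence]";

        Matcher m = ARG.matcher(input);
        if (m.find()) {
            String valence = m.group(1);      // "positive"
            if (m.find()) {
                String property = m.group(1); // "concernsConsequence some Prospective_Consequence"
                // These two values map onto Formula (1) and would be handed
                // to the reasoner, as in the classification sketch in Sect. 2.
                System.out.println(valence + " / " + property);
            }
        }
    }
}
```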

While our results show promise for semantically driven technologies, there are still opportunities for improvement. One would be to allow synonymous emotion inputs in visualization queries, for example, fondness in place of love. To permit this, we would need to expand the ontology to link similar terms to each emotion and then modify the SPARQL queries, as sketched below. These improvements are feasible because ontologies are graph-like, and they can therefore be modified more easily than, say, a relational database [1].
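As a sketch of how such synonym links could look, the following uses skos:altLabel to tie fondness to love; the choice of skos:altLabel and the namespace IRI are our illustrative assumptions, not an existing VEO feature.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SynonymLookupExample {
    public static void main(String[] args) {
        // A toy model linking the synonym "fondness" to Love via skos:altLabel.
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/veo#"; // placeholder namespace
        model.createResource(ns + "Love")
             .addProperty(
                 model.createProperty("http://www.w3.org/2004/02/skos/core#altLabel"),
                 "fondness");

        // A query that resolves a synonym to its emotion; a production query
        // would also match the emotion's preferred name.
        String sparql =
            "PREFIX skos: <http://www.w3.org/2004/02/skos/core#>\n" +
            "SELECT ?emotion WHERE { ?emotion skos:altLabel \"fondness\" }";

        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            qe.execSelect().forEachRemaining(s -> System.out.println(s.get("emotion")));
        }
    }
}
```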

To perform reasoning, the VEO-Engine required structured data input, so for this technology to be more broadly applicable, noisy contextual information from the human user must be mapped or translated into structured data. For example, to work with unstructured free text from a person’s utterances, we would need to parse the text and map the extracted information to the appropriate emotional valence and concept property parameters before passing them to the VEO-Engine. Natural language processing might offer a direction for this scenario.

4 Conclusion and Future Work

Our work exemplifies how semantically encoded emotions can be utilized by software on small devices to help machines understand human emotions. Building on our previous VEO work [9, 10], we developed the VEO-Engine, a software library that interfaces with the emotion ontology. The VEO-Engine was able to query for visualizations associated with an emotion, and it was able to deduce an emotion from sample input parameters. The combination of semantically defined emotions and a software wrapper that interfaces with the ontology makes semantic web technologies a feasible option for affective computing. In the future, we will look to incorporate this work into conversational agents for health care applications. Specifically, it could enhance how machines react and respond to patients’ or health consumers’ utterances to improve their outlook and well-being.