Abstract
To study variation in emotional responses to stimuli, different methods have been developed to elicit emotions in a replicable way, and video clips have been shown to be the most effective stimuli. However, differences in cultural background lead to different emotional responses to the same stimuli. Therefore, we compared the emotional responses of Saudi participants to a commonly used set of Western emotion-eliciting video clips and to an initial selection of Arabic emotion-eliciting video clips. We analysed skin physiological signals recorded from 29 Saudi participants while they watched the clips. The results for the validated English clips and the initial Arabic clips are comparable, which suggests a universal capability of the English set to elicit the target emotions in a Saudi sample, and that a refined selection of Arabic emotion elicitation clips could induce the target emotions with higher intensity.
1 Introduction
Recognizing emotions has attracted great interest lately, not only for creating intelligent and advanced affective computing systems, but also for gaining a better understanding of human psychology, neurobiology and sociology. For instance, an intelligent tutoring application can use emotional states to interactively adjust the lessons’ content and tutorial level, or recommend a break when signs of fatigue are detected [1]. Another example is an emotion-responsive car, which detects drivers’ states, such as levels of stress, fatigue or anger, that might impair their driving abilities [2]. Recognizing the emotional state also has the potential to play a vital role in other applied domains: for example, it can provide insights that help psychologists diagnose depression [3], detect deception [1, 4], and identify neurological illnesses such as schizophrenia [5].
To study and build any emotionally intelligent system, emotional responses have to be collected in a measurable and replicable way, which requires a standardized method to elicit emotions. There are several methods to elicit a universal emotional state [6]; the most effective is emotion elicitation video clips, for which the most popular set is Gross and Levenson’s [7].
However, researchers have found that emotional expressions, which often depend on culture-specific emotion triggers, differ between cultures [8]. The essential differences between southern and northern Americans’ reactions to the same emotion-eliciting stimuli were explained in [8], where three sources of variability were reported: genes, environment, and acquired beliefs, values and skills [8]. Therefore, the same emotion elicitation stimuli could produce different results depending on the subject’s cultural background.
Studies that explore emotional responses to stimuli in Arab cultures are rare. Given the conservative Arab culture [9], it is not possible to predict the effect of the cultural acceptance of stimulus content on the emotional response. Therefore, in this paper we compare and evaluate emotional responses to emotion elicitation clips from a commonly used Western set [7] and from an initially selected Arabic set.
2 Background
For the past few decades, affective state recognition has been an active research area and has been used in many contexts [1]. Emotion recognition research involves psychology, speech analysis, computer vision, machine learning and many other fields. Along with the advantages that this diversity of research areas introduces, it also brings competing approaches to achieving accurate recognition of affect. For example, defining emotion is controversial due to the complexity of emotional processes. Psychologically, emotions are mixed with several terms such as affect and feeling, and they are typically conjoined with sensations, moods, desires and attitudes [10]. In [11], emotion was defined as spiritual and uncontrollable feelings that affect our behavior and motivation. In the literature, emotional states have been divided into categories such as positive and negative [12], into dimensions such as valence and arousal [13], and into a discrete set of emotions such as Ekman’s basic emotions [14]. In this paper, we use Ekman’s six basic emotions (happiness, sadness, surprise, fear, anger and disgust) [14], mainly for their simplicity and universality across cultures, which reduces variability that might affect this investigation of cultural differences [8].
Emotions can be expressed via several channels; the most common are facial expressions and speech prosody [15]. Other channels include physiological signals such as heart rate, skin conductivity and brain waves. Physiological signals occur spontaneously, since they reflect the activity of the nervous system, and therefore cannot be suppressed in the way facial and vocal expressions can. Ekman argued that emotions have discriminative patterns generated by the Autonomic Nervous System (ANS) [16]; these patterns reflect the changes in human physiological signals when different emotions are elicited. Ax was the first to observe that the ANS response differs between fear and anger [17], an observation that led to several attempts to analyse emotional physiological response patterns. In this work, we analyse skin responses to emotion elicitation, which include skin conductance and skin temperature.
Electro-dermal Activity (EDA) represents electrical changes in the skin due to the activity of sweat glands that draw input from the sympathetic nervous system. Several studies noted that high EDA is linked to increased emotional arousal [18, 19], and it has also been used as a polygraph lie indicator [4]. Moreover, a high level of arousal is an indication of high task difficulty [20], challenge, frustration, and excitement [19]. The EDA response to stimuli contains two measures: skin conductance level (SCL), which is a tonic reaction, and skin conductance response (SCR), which reflects phasic reactions. Researchers have found that the tonic parameter is most likely to reveal the general state of arousal and alertness, while the phasic parameter is useful for studying attentional processes [21]. Interestingly, a significant increase in SCL is more strongly associated with negative emotions [22], especially those related to fear [21, 23], whereas a decrease is associated with pleasurable emotional states [23].
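To make the tonic/phasic distinction concrete, the sketch below separates a raw EDA trace into a slowly varying baseline (SCL) and the fast residual fluctuations (SCR) using a simple moving-average filter. The window length and the use of a moving average, rather than a dedicated deconvolution method, are assumptions for illustration only and are not described in the paper.

```python
import numpy as np

def decompose_eda(eda, fs=32, window_s=4.0):
    """Split a raw EDA trace into a slow tonic level (SCL) and a fast
    phasic residual (SCR) using a moving-average baseline (illustrative)."""
    win = max(1, int(window_s * fs))
    kernel = np.ones(win) / win
    # Tonic component: slowly varying baseline obtained by smoothing.
    scl = np.convolve(eda, kernel, mode="same")
    # Phasic component: rapid fluctuations around the baseline.
    scr = np.asarray(eda, dtype=float) - scl
    return scl, scr
```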
Skin temperature fluctuates due to changes in blood flow caused by vascular resistance or arterial blood pressure. A complex cardiovascular model has been used to describe arterial blood pressure variations, which are modulated by the autonomic nervous system. Therefore, skin temperature has been used as an indicator of autonomic nervous system activity and as an effective indicator of emotional state. Using skin temperature measurements, several studies found that fear is characterized by a large decrease in skin temperature, while anger and pleasure are characterized by an increase in skin temperature [16, 21, 23].
Despite the various studies conducted on emotion expression, studies on cultural differences in expressing emotions are still in their infancy. As an attempt to investigate the cultural aspect, the validated clips in [7] have been used to elicit emotions in different cultures. In [24], 45 German subjects rated the emotions elicited by four video clips from the [7] set; the study concluded that the selected clips were able to elicit the target emotions in the German culture. Moreover, 31 Japanese volunteers were invited to watch English video clips selected from [7] to elicit six different emotions [25]. That study concluded that most of the clips have a universal capability to elicit the target emotions. However, due to the unique characteristics of the Japanese culture, some clips elicited non-target emotions as well; for example, one of the clips intended to elicit amusement also induced surprise, interest, and embarrassment [25].
Given the differences between Arab culture and the Western and Japanese cultures, it is not clear whether the [7] set of emotion elicitation clips is capable of eliciting the target emotions in Arab subjects. Therefore, in this work, we investigate and compare emotional responses to the clips in [7] and to an initial selection of Arabic emotion elicitation clips in Saudi subjects. The purpose of the Arabic set is to investigate the effect of cultural acceptance on the elicited emotions in comparison with the English set. For validation, skin physiological responses are measured and analysed to identify the differences between the emotional responses to the English and the Arabic emotion elicitation video clips.
3 Method
3.1 Stimuli and Data Collection
In this study, we investigate the six emotions suggested by Ekman [14]. To elicit these emotions, one video clip per emotion was selected from the [7] emotion elicitation clips. Even though the [7] set contains more than one clip per emotion, with different levels of eliciting the target emotion, the clips selected in this study were chosen for their compliance with Arab ethical and cultural constraints; thus, they are not necessarily the clips with the highest elicitation levels. For example, the ’When Harry Met Sally’ clip achieved the highest level of eliciting amusement compared with two other clips [7], but was perceived as inappropriate in the relatively conservative Saudi culture. For this reason, the clip with the second-highest amusement elicitation level, ’Bill Cosby, Himself’, was selected, despite containing some cursing and disrespectful words. The final selected clips are shown in Table 1. In order to preserve the original emotional expression in the speech, the clips were not dubbed but were given Arabic subtitles. In addition to the selected English clips, an initial Arabic clip set was selected from a small pool of video clips gathered by the project team members. The selected clips are shown in Table 1.
While the 29 volunteers were watching the emotion elicitation clips, skin responses were measured using the Affectiva Q-sensor, a wireless wrist-worn sensor. All 29 participants were Saudi females, aged 18-45 years, with normal or corrected-to-normal vision. Prior to the recording, the aims and procedures were explained to each participant, and a consent form and a demographic questionnaire were obtained individually.
An interface was coded to automatically show the 12 emotion elicitation clips (both English and Arabic) in a specific order. To reduce the emotional fluctuation between two different emotions, each English video clip was followed by the Arabic video clip for the same emotion, in the following order: amusement, sadness, anger, fear, surprise, and disgust. In order to clear the subject’s mind, a five-second countdown on a black screen was shown between clips. Participants were asked to watch the video clips as they would at home and to feel free to move their head. At the end of the recording, participants were asked to answer a post-questionnaire and rate the emotions they felt while watching each clip, on a modified Likert scale [26] ranging from 0 (not felt) to 10 (felt strongly). Since not every video clip elicited the target emotion in all participants, only segments in which the participants felt the target emotion are included in the analysis (i.e. target emotion self-rating \(\ge \) 1). This selection criterion gave an average of 22 segments per clip, as detailed in Table 2.
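A minimal sketch of this selection step is given below; the record layout, field names and example values are hypothetical, used only to illustrate the self-rating threshold.

```python
# Keep only segments where the participant reported feeling the target
# emotion (self-rating >= 1); field names and values are hypothetical.
segments = [
    {"subject": 1, "clip": "amusement_en", "target_self_rating": 7},
    {"subject": 2, "clip": "amusement_en", "target_self_rating": 0},
]

selected = [s for s in segments if s["target_self_rating"] >= 1]
```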
3.2 Feature Preparation and Extraction
As mentioned earlier, the Affectiva Q-sensor was used to record the skin responses to emotional stimuli. The sensor was not attached to the computer; however, its raw data are time-stamped. Therefore, to prepare the data for analysis, the time-stamps were used to segment the data for each subject and each clip. This pre-processing step was performed offline using Matlab code, which matches the time-stamped data with the starting time and duration of each clip in each subject’s recording. Moreover, most video clips start by setting the scene and giving background as preparation before eliciting the target emotion. Hence, in our analysis we only used the data from the parts of the clips where the target emotions were presumed to be elicited. The start and end times of these parts for each clip are presented in Table 1.
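The original segmentation was implemented in Matlab; the Python sketch below shows the same idea under assumed variable names, slicing one clip's emotion-eliciting window out of a time-stamped recording.

```python
import numpy as np

FS = 32  # Q-sensor sampling rate in Hz

def segment_clip(signal, recording_start, clip_start, emo_start_s, emo_end_s):
    """Extract the samples of one clip's emotion-eliciting window.

    signal          : 1-D array of raw samples recorded at FS Hz
    recording_start : timestamp (seconds) of the first sample
    clip_start      : timestamp (seconds) at which the clip began playing
    emo_start_s, emo_end_s : start/end offsets within the clip (Table 1), in seconds
    """
    offset = clip_start - recording_start
    start = int((offset + emo_start_s) * FS)
    end = int((offset + emo_end_s) * FS)
    return np.asarray(signal)[start:end]
```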
The Q-sensor records several raw data channels at a 32 Hz sampling rate. These raw features can be categorized as: (1) electro-dermal activity features, and (2) skin temperature. As mentioned in the background section, the EDA features are divided into two main categories: skin conductance level and skin conductance response. In this work, we extract several statistical features from the SCL, SCR and skin temperature data. An initial feature extraction and analysis of the SCR data was performed, but none of the extracted features was effective for the analysis of emotional response, which is in line with the literature [21]. Therefore, only SCL and skin temperature are included in this study. From both SCL and skin temperature data, we extracted the following features (a sketch of how they might be computed appears after the list):
- Max, Min, Mean, Range, STD, and Variance (6 \(\times \) 2).
- Max, Min, Mean, Range, STD and Variance of the slopes (6 \(\times \) 2).
- Max, Min and Average values of the peaks and valleys (3 \(\times \) 2 \(\times \) 2).
- Average number of peaks and valleys per second (1 \(\times \) 2).
- Duration (number of frames) above and below a threshold (i.e. 0.01 for SCL and 37 °C for skin temperature) (2 \(\times \) 2).
- Max, Min, Mean, Range, STD and Variance of the temperature velocity and acceleration (6 \(\times \) 2 \(\times \) 2).
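The sketch below illustrates how such statistics could be computed from one segment of one signal. The use of scipy.signal.find_peaks and the exact handling of edge cases are implementation assumptions not stated in the paper, and slope statistics double as the velocity statistics here.

```python
import numpy as np
from scipy.signal import find_peaks

def basic_stats(x):
    # Max, Min, Mean, Range, STD and Variance of a 1-D signal.
    return [np.max(x), np.min(x), np.mean(x), np.ptp(x), np.std(x), np.var(x)]

def segment_features(signal, fs=32, threshold=0.01):
    """Illustrative feature vector for one segment of one signal
    (threshold: 0.01 for SCL, 37 for skin temperature)."""
    signal = np.asarray(signal, dtype=float)
    slope = np.diff(signal) * fs               # first derivative (velocity)
    accel = np.diff(slope) * fs                # second derivative (acceleration)
    peaks, _ = find_peaks(signal)
    valleys, _ = find_peaks(-signal)
    duration_s = len(signal) / fs

    feats = basic_stats(signal) + basic_stats(slope)
    for idx in (peaks, valleys):
        vals = signal[idx] if len(idx) else np.array([np.nan])
        feats += [np.max(vals), np.min(vals), np.mean(vals)]
    feats.append((len(peaks) + len(valleys)) / duration_s)   # events per second
    feats += [int(np.sum(signal > threshold)),                # frames above threshold
              int(np.sum(signal < threshold))]                # frames below threshold
    feats += basic_stats(accel)                               # acceleration statistics
    return np.array(feats)
```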
3.3 Analysis and Classification
The extracted features were analysed using a one-way between-groups analysis of variance (ANOVA), a statistical test of the differences between groups on a variable. In our case, the test was a one-way between-groups ANOVA using the six emotions as the groups for each of the extracted features, with significance level \(p \le 0.01\). The ANOVA test was used here for two purposes: (1) to identify the features that significantly differentiate emotions, and (2) as a feature selection method to reduce the dimensionality of the classification problem. To ensure a fair comparison, the features extracted from the English and Arabic clips are analysed individually, and then the common features that are significantly different between emotion groups in both languages are selected for the classification.
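A sketch of this per-feature ANOVA and of intersecting the significant features from both language sets is given below, using scipy; the array layout and variable names are assumed for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

def significant_features(X, y, alpha=0.01):
    """Indices of features whose group means differ across the six emotions.

    X : (n_segments, n_features) feature matrix
    y : (n_segments,) emotion label of each segment
    """
    significant = []
    for j in range(X.shape[1]):
        groups = [X[y == label, j] for label in np.unique(y)]
        _, p = f_oneway(*groups)          # one-way between-groups ANOVA
        if p <= alpha:
            significant.append(j)
    return significant

# Features kept for classification: significant in both language sets
# (X_en/y_en and X_ar/y_ar are hypothetical English/Arabic data arrays).
# common = sorted(set(significant_features(X_en, y_en)) &
#                 set(significant_features(X_ar, y_ar)))
```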
For classifying emotions automatically using the selected features, we used a Support Vector Machine (SVM) classifier. SVM is a discriminative classifier that separates classes based on the concept of decision boundaries. LibSVM [27] was used as the SVM implementation in this paper. To increase the accuracy of the classification, the SVM parameters must be tuned; therefore, we searched for the best gamma and cost parameters using a wide-range grid search with a radial basis function (RBF) kernel. The classification is performed as a multiclass problem (6 emotion classes) in a subject-independent scenario. Since SVM is mainly a binary classifier, we used the one-versus-one approach for multiclass classification, in which several SVMs are constructed to separate each pair of classes and a final decision is made by majority voting over the constructed classifiers. Moreover, to mitigate the effect of the limited amount of data, leave-one-segment-out cross-validation is used without any overlap between training and testing data. When dealing with features of different scales, normalization is recommended to ensure that each individual feature has an equal effect on the classification problem. In this work, we used Min-Max normalization, which scales the values between 0 and 1 and preserves the relationships in the data.
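A sketch of this pipeline using scikit-learn, whose SVC is built on LibSVM and handles multiclass problems with a one-versus-one scheme internally, is shown below. The parameter grid, the inner cross-validation for the grid search, and the variable names are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV, LeaveOneOut, cross_val_predict

def classify_emotions(X, y):
    """Leave-one-segment-out multiclass RBF-SVM with grid-searched C and gamma."""
    model = make_pipeline(
        MinMaxScaler(),                                  # scale each feature to [0, 1]
        GridSearchCV(
            SVC(kernel="rbf"),                           # LibSVM-based, one-vs-one internally
            param_grid={"C": 10.0 ** np.arange(-2, 5),
                        "gamma": 10.0 ** np.arange(-4, 2)},
            cv=3,
        ),
    )
    # Each segment is held out once while the model is trained on the rest.
    return cross_val_predict(model, X, y, cv=LeaveOneOut())
```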
The emotion elicitation self-ratings of the target emotions reported by the subjects were statistically analysed to compare the English and Arabic video clips. To compare the self-ratings of the target emotion between the two language groups of video clips, a two-sample two-tailed t-test assuming unequal variances was used, with a significance level of \(p = 0.05\).
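This comparison corresponds to Welch's t-test, which scipy exposes through the equal_var=False flag; a minimal sketch follows, with placeholder rating values that are not the study's data.

```python
from scipy.stats import ttest_ind

# Self-ratings of the target emotion for one emotion category from the
# English and Arabic clips (placeholder values, not the study's data).
ratings_en = [7, 5, 8, 6, 9, 4]
ratings_ar = [3, 4, 2, 5, 3, 4]

# equal_var=False gives Welch's two-sample, two-tailed t-test.
t_stat, p_value = ttest_ind(ratings_en, ratings_ar, equal_var=False)
significant = p_value < 0.05
```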
4 Results
To compare the emotion elicitation levels of the English and Arabic clips, the reported self-ratings of the target emotions were analysed statistically using t-tests. The average self-rating of the target emotion for each clip is illustrated in Fig. 1. According to the t-tests, only the amusement and sadness clips differed significantly between the Arabic and English versions in eliciting the target emotion, as marked in Fig. 1. The results show that the English and Arabic clips induced the target emotions similarly, with the exception of the amusement and sadness clips. Moreover, this suggests that the English video clips have a universal capability to induce the target emotions, with the exception of the amusement clip. Owing to the complexity of interpreting jokes, it has been recommended to use amusement clips from the participants’ own cultural background [28]. Furthermore, the similarity in inducing emotions between an initial set of Arabic emotion elicitation clips and the validated English clips suggests that a refined selection of Arabic emotion elicitation clips might improve the levels and intensity of the induced emotions. A full framework for developing a refined selection of emotion elicitation clips for Arab participants has been suggested in [29].
In order to characterize the skin physiological changes in response to emotional stimuli, we statistically analysed the extracted features. Table 3 shows the features that are significantly different between emotions (\(p \leqslant 0.01\)) according to the ANOVA test on the English and Arabic clips. Among the EDA features, only the durations above and below the 0.01 threshold (see Note 1) were significant between emotions. This result is in line with [2], where the duration features of EDA were found to be robust for emotion recognition. Among the skin temperature features, several were significantly different between emotions, in line with [30]. An increase or a decrease in skin temperature associated with emotional states has been found in several studies [16, 21, 23]. In line with these studies, skin temperature changes, represented by the velocity, acceleration and slope of the temperature values, were statistically significant in differentiating emotions in this study. We also used the normal body temperature (37 °C) as a threshold, and the duration for which the temperature was below 37 °C was significantly different between emotions.
Moreover, the ANOVA test was performed on the features extracted from the English and Arabic emotion elicitation clips individually (see Table 3). Most features were significant between emotions in both the English and Arabic clip sets, which suggests that both language sets were able to elicit the target emotions. The Arabic clips had a few more features that were significant between emotions than the English clips; these include changes in temperature represented by velocity and slope statistics (as shown in Table 3). On the other hand, only two features (the minimum and mean of the temperature valleys) were significant for the English clips but not for the Arabic clips. The differences in the significant features between the English and Arabic emotion elicitation clips might be caused by different levels of eliciting the target emotions.
In order to further compare and validate the skin responses to emotion elicitation from the English and Arabic video clips, we classified the six elicited emotions using SVM. To ensure a fair comparison between the English and Arabic classifications, we used the common features that were significant between emotions according to the ANOVA in both clip sets (see Table 3). Figure 2 shows the confusion matrices of the multiclass emotion classification for the Arabic and English clips individually.
Comparing the overall accuracy of classifying emotions from the English and Arabic clips, the English clips achieved a higher classification accuracy (94 %) than the Arabic clips (70 %). The lower classification accuracy for the Arabic emotion elicitation clips might be caused by one of two factors: (1) the level of eliciting the target emotion might not have been consistent between subjects, or (2) some features that were significant for the Arabic clips were not common with the English clips and were therefore not selected; this reduction in the selected features might have affected the final classification result for the Arabic clips.
The confusion matrix of the emotion classification from the English clips shows less confusion than that of the Arabic clips. In the confusion matrix of the Arabic clips, sadness and disgust were confused with each other in both the sadness and disgust elicitation clips. The low classification accuracy for sadness from the Arabic clips could be explained by the low average self-rating of sadness. However, the classification of amusement in the English clip was high even though the average self-rating was low, and the disgust classification was below chance level even though the average self-rating was high. At this point, it is difficult to establish a correlation between the self-ratings and the classification results; more data are needed. Furthermore, regression could be used in combination with the emotion classification, where the self-rating of the target emotion could be used to estimate the level of arousal of the targeted emotion.
When classifying multiple emotions, emotions with similar features are often confused. For example, when using speech features to detect emotions, joy and surprise or anger and surprise were often confused [31], and when using facial expressions, the most commonly confused expressions were sadness with neutral or fear, and disgust with anger [32, 33]. To overcome this issue, some studies group the confused emotions into one class and then use separate classifiers across groups as well as within the same group to obtain more accurate results, as in [34, 35]. However, the confusion of sadness and disgust only occurred in the Arabic elicitation clips, which suggests that the features used for the classification do not differentiate these emotions; this might be caused by the reduction in the features selected from the Arabic clips.
Given that the English clips were successful in eliciting the target emotions, the classification results are encouraging and suggest the robustness of skin response features for emotion recognition. However, since the self-ratings of the target emotions for the English clips varied among participants, more data are required to confirm these results.
5 Conclusion
Recognizing emotional states contributes to building intelligent applications in affective computing, education and psychology. Several methods have been developed to elicit emotions in a replicable way in order to study emotional response patterns, of which emotion elicitation clips are the most effective. Given cultural and language differences, the same stimuli might not produce similar emotional responses in subjects from different backgrounds. In this paper, we compared emotional responses to English emotion elicitation clips from [7] and to an initial selection of Arabic emotion elicitation clips in a Saudi Arabian sample.
Focusing on the six universal emotions suggested by Ekman, we measured skin physiological signals in response to emotion elicitation clips in 29 Saudi female participants. Analysing the self-ratings of the targeted emotions reported by the participants for the English and the initial Arabic clips, most self-ratings did not differ significantly between the two language sets; the exceptions were the amusement and sadness clips. This finding is twofold: (1) the English clips have a universal capability to elicit the target emotions in a Saudi sample, and (2) a refined selection of Arabic emotion elicitation clips would be beneficial in eliciting the targeted emotions with higher levels of intensity.
To characterize emotions, we extracted several statistical features from the skin response signals and analysed them statistically using ANOVA. To compare emotion elicitation from the English and Arabic clips, the ANOVA tests were performed on the two language clip sets individually. The features found to significantly differentiate emotions are in line with the literature, and most of them were significant for both the English and Arabic clips. The similarity of the significant features for English and Arabic suggests that both language clip sets are able to elicit the target emotions; the differences in these features between the two sets could indicate differences in the intensity with which the target emotions were elicited.
To further compare and validate the effectiveness of the skin response features in differentiating emotions, we classified the six elicited emotions using a multiclass SVM. The common features of the English and Arabic clips that were significant according to the ANOVA test were used in the classification. Emotions from the English and Arabic clips were classified individually for comparison; the English set performed better (94 %) than the Arabic set (70 %). The lower classification result for the Arabic set is caused by sadness and disgust being confused with each other, which might in turn be caused by the reduction in the selected features. Nevertheless, the high performance in classifying emotions from the English set suggests the robustness of skin response features for emotion recognition, given that the clips successfully induce the target emotions.
6 Limitation and Future Work
A known limitation of this study is the relatively small sample size and the fact that participants were drawn from a narrow region of Saudi Arabia. Moreover, the comparable performance of the initial selection of Arabic emotion elicitation clips relative to the validated English clips suggests that a refined selection of Arabic emotion elicitation clips might improve the levels and intensity of the induced emotions. Future work will aim to develop such a refined selection of emotion elicitation clips for Arab participants, as suggested in [29].
Notes
1. Note that the threshold of 0.01 is widely used for the analysis of the EDA signal [23].
References
Fragopanagos, N., Taylor, J.G.: Emotion recognition in human-computer interaction. Neural Networks 18(4), 389–405 (2005)
Singh, R.R., Conjeti, S., Banerjee, R.: A comparative evaluation of neural network classifiers for stress level analysis of automotive drivers using physiological signals. Biomed. Signal Process. Control 8(6), 740–754 (2013)
Alghowinem, S.: From joyous to clinically depressed: Mood detection using multimodal analysis of a person’s appearance and speech. In: 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), pp. 648–654, September 2013
Jiang, L., Qing, Z., Wenyuan, W.: A novel approach to analyze the result of polygraph. In: 2000 IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, pp. 2884–2886. IEEE (2000)
Park, S., Reddy, B.R., Suresh, A., Mani, M.R., Kumar, V.V., Sung, J.S., Anbuselvi, R., Bhuvaneswaran, R., Sattarova, F., Shavkat, S.Y., et al.: Electro-dermal activity, heart rate, respiration under emotional stimuli in schizophrenia. Int. J. Adv. Sci. Technol. 9, 1–8 (2009)
Westermann, R., Spies, K., Stahl, G., Hesse, F.W.: Relative effectiveness and validity of mood induction procedures: a meta-analysis. Eur. J. Soc. Psychol. 26(4), 557–580 (1996)
Gross, J.J., Levenson, R.W.: Emotion elicitation using films. Cogn. Emot. 9(1), 87–108 (1995)
Richerson, P.J., Boyd, R.: Not by Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press, Chicago (2008)
Al-Saggaf, Y., Williamson, K.: Online communities in Saudi Arabia: evaluating the impact on culture through online semi-structured interviews. In: Forum Qualitative Sozialforschung/Forum: Qualitative Social Research, vol. 5 (2004)
Solomon, R.C.: The Passions: Emotions And The Meaning of Life. Hackett Publishing, Cambridge (1993)
Boehner, K., DePaula, R., Dourish, P., Sengers, P.: How emotion is made and measured. Int. J. Hum.-Comput. Stud. 65(4), 275–291 (2007)
Cacioppo, J.T., Gardner, W.L., Berntson, G.G.: The affect system has parallel and integrative processing components: form follows function. J. Pers. Soc. Psychol. 76(5), 839 (1999)
Jaimes, A., Sebe, N.: Multimodal human-computer interaction: a survey. Comput. Vis. Image Underst. 108(1), 116–134 (2007)
Dalgleish, T., Power, M.J.: Handbook of cognition and emotion. Wiley Online Library (1999)
Brave, S., Nass, C.: Emotion in human-computer interaction. In: The Human-computer Interaction Handbook: Fundamentals, Evolving Technologies And Emerging Applications, pp. 81–96 (2002)
Ekman, P., Levenson, R.W., Friesen, W.V.: Autonomic nervous system activity distinguishes among emotions. Science 221(4616), 1208–1210 (1983)
Ax, A.F.: The physiological differentiation between fear and anger in humans. Psychosom. Med. 15(5), 433–442 (1953)
Kim, K.H., Bang, S., Kim, S.: Emotion recognition system using short-term monitoring of physiological signals. Med. Biol. Eng. Comput. 42(3), 419–427 (2004)
Mandryk, R.L., Atkins, M.S.: A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies. Int. J. Hum.-Comput. Stud. 65(4), 329–347 (2007)
Frijda, N.H.: The Emotions. Cambridge University Press, Cambridge (1986)
Henriques, R., Paiva, A., Antunes, C.: On the need of new methods to mine electrodermal activity in emotion-centered studies. In: Cao, L., Zeng, Y., Symeonidis, A.L., Gorodetsky, V.I., Yu, P.S., Singh, M.P. (eds.) ADMI. LNCS, vol. 7607, pp. 203–215. Springer, Heidelberg (2013)
Drachen, A., Nacke, L.E., Yannakakis, G., Pedersen, A.L.: Correlation between heart rate, electrodermal activity and player experience in first-person shooter games. In: Proceedings of the 5th ACM SIGGRAPH Symposium on Video Games, pp. 49–54. ACM (2010)
Boucsein, W.: Electrodermal Activity. Springer, New York (2012)
Hagemann, D., Naumann, E., Maier, S., Becker, G., Lürken, A., Bartussek, D.: The assessment of affective reactivity using films: validity, reliability and sex differences. Personality Individ. Differ. 26(4), 627–639 (1999)
Sato, W., Noguchi, M., Yoshikawa, S.: Emotion elicitation effect of films in a Japanese sample. Soc. Behav. Pers. Int. J. 35(7), 863–874 (2007)
Likert, R.: A technique for the measurement of attitudes. Archiv. Psychol. 22(140), 1–55 (1932)
Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27:1–27:27 (2011). http://www.csie.ntu.edu.tw/cjlin/libsvm
Raskin, V.: Semantic Mechanisms of Humor, vol. 24. Springer, Netherlands (1985)
Alghowinem, S., Alghuwinem, S., Alshehri, M., Al-Wabil, A., Goecke, R., Wagner, M.: Design of an emotion elicitation framework for arabic speakers. In: Kurosu, M. (ed.) HCI 2014, Part II. LNCS, vol. 8511, pp. 717–728. Springer, Heidelberg (2014)
Haag, A., Goronzy, S., Schaich, P., Williams, J.: Emotion recognition using bio-sensors: first steps towards an automatic system. In: André, E., Dybkjær, L., Minker, W., Heisterkamp, P. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 36–48. Springer, Heidelberg (2004)
Cahn, J.E.: The generation of affect in synthesized speech. J. Am. Voice I/O Soc. 8, 1–19 (1990)
Cohen, I., Sebe, N., Garg, A., Chen, L.S., Huang, T.S.: Facial expression recognition from video sequences: temporal and static modeling. Comput. Vis. Image Underst. 91(1), 160–187 (2003)
Soyel, H., Demirel, H.O.: Facial expression recognition using 3D facial feature distances. In: Kamel, M.S., Campilho, A. (eds.) ICIAR 2007. LNCS, vol. 4633, pp. 831–838. Springer, Heidelberg (2007)
Tato, R., Santos, R., Kompe, R., Pardo, J.M.: Emotional space improves emotion recognition. In: INTERSPEECH (2002)
Yacoub, S.M., Simske, S.J., Lin, X., Burns, J.: Recognition of emotions in interactive voice response systems. In: INTERSPEECH (2003)
Acknowledgment
The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding the work through the research group project number RGP-VPP-157.