Effect of Parameter Tuning at Distinguishing Between Real and Posed Smiles from Observers’ Physiological Features

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10637)

Abstract

Determining the genuineness of human behavior and emotion is an important research topic in affective and human-centered computing. This paper applies feature-level fusion to three peripheral physiological signals recorded from observers: pupillary response (PR), blood volume pulse (BVP), and galvanic skin response (GSR). The observers' task was to distinguish real from posed smiles while watching videos of twenty smilers (half showing real smiles, half posed). After several preprocessing steps, a number of temporal features are extracted from the recorded physiological signals and fused before classification performance is computed with k-nearest neighbor (KNN), support vector machine (SVM), and neural network (NN) classifiers. Many factors can affect smile classification results, and these depend on the classifier architecture. In this study, we varied the k value of KNN, the scaling factor of SVM, and the number of hidden nodes of NN, leaving all other parameters unchanged. Our final experimental results, obtained with a robust leave-one-everything-out procedure, indicate that parameter tuning is vital for achieving high classification accuracy, and that feature-level fusion can indicate when more parameter tuning is needed.
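
To make the pipeline concrete, below is a minimal sketch in Python with scikit-learn (not the authors' code) of the three steps the abstract describes: feature-level fusion by concatenating per-signal feature vectors, a sweep of each classifier's single tuned parameter, and cross-validated evaluation. Plain leave-one-out stands in for the paper's leave-one-everything-out procedure, and the data, feature dimensions, and parameter grids are hypothetical placeholders.

    # Minimal sketch (not the authors' code): feature-level fusion of three
    # observer physiological signals, then the three parameter sweeps the
    # abstract names, each evaluated with leave-one-out cross-validation.
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 40  # hypothetical number of observer-video trials

    # Hypothetical temporal features per signal (the paper's exact features differ).
    pr = rng.normal(size=(n, 6))    # pupillary response features
    bvp = rng.normal(size=(n, 6))   # blood volume pulse features
    gsr = rng.normal(size=(n, 6))   # galvanic skin response features
    y = rng.integers(0, 2, size=n)  # 1 = real smile, 0 = posed smile

    # Feature-level fusion: concatenate the per-signal feature vectors.
    X = np.hstack([pr, bvp, gsr])

    loo = LeaveOneOut()

    def loo_accuracy(model):
        """Mean leave-one-out accuracy for a standardized-input classifier."""
        return cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=loo).mean()

    # Sweep one parameter per classifier, holding everything else fixed.
    for k in (1, 3, 5, 7):
        print(f"KNN k={k}: {loo_accuracy(KNeighborsClassifier(n_neighbors=k)):.2f}")
    for gamma in (0.01, 0.1, 1.0):  # RBF kernel scaling factor
        print(f"SVM gamma={gamma}: {loo_accuracy(SVC(kernel='rbf', gamma=gamma)):.2f}")
    for h in (5, 10, 20):           # number of hidden nodes
        nn = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
        print(f"NN hidden={h}: {loo_accuracy(nn):.2f}")

Sweeping a single parameter while all others stay fixed, as done here, isolates that parameter's effect on accuracy, which is the comparison the paper reports.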

Keywords

Physiological features · Real smile · Posed smile · Observers · Parameter tuning · k-nearest neighbor · Support vector machine · Neural network

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Research School of Computer Science, Australian National University, Canberra, Australia