Employing Kaze Features for the Purpose of Emotion Recognition

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 710)

Abstract

In this research, a novel approach to emotion recognition is explored using Accelerated KAZE (A-KAZE) features. KAZE features preserve object boundaries by making the blurring locally adaptive to the image data, without severely sacrificing the noise-reducing capability of Gaussian blurring, thereby increasing the accuracy of the system. After the KAZE features are extracted, a Gaussian Mixture Model (GMM) is fitted over them and a Fisher Vector representation is computed. The resulting feature vectors are then classified with an SVM. An accuracy of 87.5% is achieved, showing that KAZE features can also be used effectively in the field of facial image processing.

Keywords

Real time · Emotion recognition · KAZE features · State of mind · Accelerated KAZE · Facial expressions

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Bachelors of Technology, Maharaja Surajmal Institute of Technology, New Delhi, India