
Implementation of an improved facial emotion retrieval method in multimedia system

Multimedia Tools and Applications

Abstract

With the development of information technology, interaction between humans and machines is growing, and the need for human-friendly systems is increasing accordingly. The most important requirement in communication between humans and machines is that each understands the other's thoughts and emotions. This paper proposes a new method for grasping human emotion by recognizing emotion from facial images. The approach consists of two parts: a combination of principal component analysis and linear discriminant analysis for the pattern recognition problem, and a support vector machine for emotion retrieval from the images.
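As a rough illustration of this two-stage approach, the sketch below chains principal component analysis and linear discriminant analysis for dimensionality reduction with a support vector machine classifier. It uses scikit-learn rather than the authors' implementation; the random arrays stand in for flattened grayscale face images and their emotion labels, and the parameter choices (component count, kernel) are assumptions for demonstration only.

```python
# Illustrative PCA + LDA feature extraction followed by an SVM classifier for
# facial emotion recognition. Placeholder data, not the paper's experiments.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Placeholder data: 300 face images of 48x48 pixels, 7 emotion classes.
rng = np.random.default_rng(0)
X = rng.random((300, 48 * 48))
y = rng.integers(0, 7, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

pipeline = Pipeline([
    ("pca", PCA(n_components=50)),          # reduce dimensionality, keep dominant variance
    ("lda", LinearDiscriminantAnalysis()),  # project onto class-discriminative axes
    ("svm", SVC(kernel="rbf", C=1.0)),      # classify the low-dimensional features
])

pipeline.fit(X_train, y_train)
print("test accuracy:", pipeline.score(X_test, y_test))
```

In practice the PCA component count and the SVM kernel parameters would be tuned on a validation split of an actual facial expression dataset.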




Acknowledgements

This work was supported by Hanshin University Research Grant.

Author information


Corresponding author

Correspondence to JaeKhun Chang.


About this article


Cite this article

Chang, J., Ryoo, S. Implementation of an improved facial emotion retrieval method in multimedia system. Multimed Tools Appl 77, 5059–5065 (2018). https://doi.org/10.1007/s11042-017-5241-5

