
Lateralized Approach for Robustness Against Attacks in Emotion Categorization from Images

  • Conference paper
  • First Online:
Applications of Evolutionary Computation (EvoApplications 2021)

Abstract

Deep learning has achieved high accuracy on image classification tasks, including emotion categorization. However, deep learning models are highly vulnerable to adversarial attacks: even a small change that is imperceptible to a human (e.g. a one-pixel attack) can sharply reduce their classification accuracy. One reason could be that their homogeneous representation of knowledge, which treats all pixels in an image as equally important, is easily fooled. Enabling multiple representations of the same object, e.g. at the constituent and the holistic viewpoint, provides robustness against attacks on any single view. Such heterogeneity is provided by lateralization in biological systems: the lateral asymmetry of biological intelligence suggests heterogeneous learning of objects, allowing information to be learned at different levels of abstraction, i.e. at the constituent and the holistic level, so that the same object has multiple representations.
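To make this vulnerability concrete, the following minimal sketch (not taken from the paper; the classifier, image, and pixel coordinates are hypothetical placeholders) shows how a one-pixel perturbation can be applied before re-querying a trained model:

```python
import numpy as np

def one_pixel_perturbation(image, x, y, rgb):
    """Return a copy of `image` with the pixel at (x, y) replaced by `rgb`.

    `image` is assumed to be an H x W x 3 array with values in [0, 1].
    """
    perturbed = image.copy()
    perturbed[y, x] = np.asarray(rgb, dtype=image.dtype)
    return perturbed

# Hypothetical usage with any trained classifier exposing a `predict` method:
#   original_label = classifier.predict(image[np.newaxis])
#   attacked = one_pixel_perturbation(image, x=24, y=31, rgb=(1.0, 0.0, 0.0))
#   attacked_label = classifier.predict(attacked[np.newaxis])
# A single altered pixel can be enough to change the predicted emotion.
```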

This work aims to create a novel system that can consider heterogeneous features, e.g. the mouth, eyes, nose, and jaw in a face image, for emotion categorization. The experimental results show that the lateralized system successfully combines constituent and holistic features to exhibit robustness to changes in an image that are unimportant or irrelevant to emotion, achieving accuracy better than (or similar to) the deep learning system (VGG19). Overall, the novel lateralized method shows stronger resistance to such changes (10.86–47.72% decrease in accuracy) than the deep model (25.15–83.43% decrease). The advances arise from allowing heterogeneous features, which enable constituent and holistic representations of image components.
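As an illustration only (the function names, region set, and majority vote below are assumptions rather than the authors' implementation), a lateralized decision over heterogeneous features can be sketched as independent predictions from constituent regions and the holistic face, combined by a vote so that corrupting a single view is not enough to flip the outcome:

```python
from collections import Counter

def lateralized_predict(face_image, constituent_models, holistic_model, extract_region):
    """Combine constituent-level and holistic-level predictions by majority vote.

    constituent_models: dict mapping region names ('mouth', 'eyes', 'nose', 'jaw')
                        to classifiers trained on that region only.
    holistic_model:     classifier trained on the whole face.
    extract_region:     callable returning the cropped region for a given name.
    All classifiers are assumed to expose predict(image) -> emotion label.
    """
    votes = [holistic_model.predict(face_image)]
    for region, model in constituent_models.items():
        votes.append(model.predict(extract_region(face_image, region)))
    # An attack that corrupts one view (e.g. a single region) is outvoted by
    # the remaining, unaffected representations.
    return Counter(votes).most_common(1)[0][0]
```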


Notes

  1. The facial landmark is a set of coordinates that covers the whole face (see the sketch below).
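For context, landmark coordinates of this kind are commonly obtained with dlib's 68-point shape predictor; the sketch below is not the paper's implementation and assumes the pre-trained `shape_predictor_68_face_landmarks.dat` file is available locally:

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(gray_image):
    """Return, for each detected face, the 68 (x, y) landmark coordinates."""
    all_landmarks = []
    for face in detector(gray_image):          # detect face bounding boxes
        shape = predictor(gray_image, face)    # fit the 68-point landmark model
        all_landmarks.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return all_landmarks
```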


Author information


Correspondence to Harisu Abdullahi Shehu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Shehu, H.A., Siddique, A., Browne, W.N., Eisenbarth, H. (2021). Lateralized Approach for Robustness Against Attacks in Emotion Categorization from Images. In: Castillo, P.A., Jiménez Laredo, J.L. (eds) Applications of Evolutionary Computation. EvoApplications 2021. Lecture Notes in Computer Science(), vol 12694. Springer, Cham. https://doi.org/10.1007/978-3-030-72699-7_30


  • DOI: https://doi.org/10.1007/978-3-030-72699-7_30

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72698-0

  • Online ISBN: 978-3-030-72699-7

  • eBook Packages: Computer Science, Computer Science (R0)
