Abstract
Over the past few decades, real-time emotion detection has been an active area of research. This study develops an emotion detection algorithm based on facial landmarks and a convolutional neural network (CNN) to recognize the emotional states of physically challenged individuals and the cognitive gestures of children with autism. The algorithm runs in real time using virtual markers and performs reliably under uneven lighting, head rotation, varied backgrounds, and different facial tones. Six facial emotions (happiness, sorrow, rage, surprise, disgust, and fear) are captured through the virtual markers. Faces are first detected with a cascade classifier. Each detected face image is then pre-processed to remove noise, which improves classification accuracy. Finally, the pre-processed images are passed to the CNN classifier, which assigns one of the six emotion labels. The performance of the proposed approach is evaluated in terms of accuracy, precision, recall, and F-measure; the proposed CNN-based emotion detector achieves a cumulative recognition rate of 99.81%.
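The three-stage pipeline described above (cascade-based face detection, noise-removal pre-processing, CNN classification) can be sketched in a few lines of Python. This is a minimal illustration only, assuming OpenCV's bundled Haar cascade and a small Keras CNN; the 48x48 grayscale input size, the Gaussian-blur and histogram-equalization pre-processing, and the layer configuration are illustrative assumptions rather than the authors' exact design, and the virtual-marker/landmark stage is omitted.

# Minimal sketch of the pipeline from the abstract, under assumed settings.
import cv2
from tensorflow.keras import layers, models

EMOTIONS = ["happiness", "sorrow", "rage", "surprise", "disgust", "fear"]

# Stage 1: face detection with a cascade classifier (OpenCV's bundled Haar model).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(frame):
    """Detect the first face, then denoise and normalize the crop (stage 2)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    crop = gray[y:y + h, x:x + w]
    crop = cv2.GaussianBlur(crop, (3, 3), 0)   # assumed noise-removal step
    crop = cv2.equalizeHist(crop)              # assumed lighting normalization
    crop = cv2.resize(crop, (48, 48)).astype("float32") / 255.0
    return crop.reshape(1, 48, 48, 1)

# Stage 3: an illustrative CNN over the six emotion classes.
def build_cnn():
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

A real-time loop would read frames from cv2.VideoCapture(0), call preprocess on each frame, and map the argmax of model.predict(x) back to an EMOTIONS label; training data, the landmark extraction, and the configuration behind the reported 99.81% are not reproduced here. Precision, recall, and F-measure can be computed on held-out predictions with scikit-learn's precision_recall_fscore_support.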
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Umamaheswari, K.M., Vignesh, M.T. (2023). Face Emotion Detection for Autism Children Using Convolutional Neural Network Algorithms. In: Biswas, A., Semwal, V.B., Singh, D. (eds) Artificial Intelligence for Societal Issues. Intelligent Systems Reference Library, vol 231. Springer, Cham. https://doi.org/10.1007/978-3-031-12419-8_10
DOI: https://doi.org/10.1007/978-3-031-12419-8_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-12418-1
Online ISBN: 978-3-031-12419-8
eBook Packages: Intelligent Technologies and Robotics (R0)