
HARNet: design and evaluation of a deep genetic algorithm for recognizing yoga postures

  • Original Paper
Signal, Image and Video Processing

Abstract

This research presents an interactive toy agent that leverages a deep learning approach to help treat autistic children by teaching them Yoga. The objective of the proposed toy is to understand the basic needs of autistic children and help them adapt socially to their surrounding environment. Since children with autism face social insecurities when interacting and communicating with people, we introduce an interactive toy that accompanies the child and acts as a companion. The toy is orchestrated with IoT and a deep learning framework (HARNet), which enables it to interactively instruct Yoga asanas to the child. The motion of the toy is controlled by touch sensors, and interaction is developed through recognition of the Yoga postures performed by the child. This paper uses snippets of data from the Yoga-82 dataset; the gestures of the Yoga asanas are leveraged to model HARNet. Empirical evaluations show that HARNet achieves an accuracy of 98.52% on the Yoga-82 dataset. The cost of the toy framework is also compared with state-of-the-art research on humanoid toys, demonstrating the economical range of the proposed framework.
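The HARNet architecture and its deep genetic algorithm are not detailed in this excerpt. As a purely illustrative sketch of the recognition task it addresses, the snippet below fine-tunes a pretrained image classifier on Yoga-82-style posture images; the MobileNetV2 backbone, input size, folder layout, and training settings are assumptions made for illustration, not the authors' design.

```python
# Illustrative sketch: fine-tune a pretrained CNN on Yoga-82-style posture images.
# The backbone, preprocessing, and training loop are assumptions; they do not
# reproduce HARNet or the paper's deep genetic algorithm.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

NUM_CLASSES = 82  # Yoga-82 defines 82 fine-grained pose classes

# Standard ImageNet preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model() -> nn.Module:
    """Pretrained MobileNetV2 with its classifier head replaced for 82 classes."""
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)
    return model

def train_one_epoch(model, loader, optimizer, device):
    """One pass over the training data with cross-entropy loss."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Assumes Yoga-82 images arranged in class-named folders under ./yoga82/train
    train_set = datasets.ImageFolder("./yoga82/train", transform=preprocess)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    model = build_model().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    train_one_epoch(model, loader, optimizer, device)
```

In the full system described in the abstract, such a posture classifier would presumably run on or alongside the IoT-equipped toy, with touch-sensor events controlling motion and the recognized posture driving the interactive Yoga instruction.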



Data availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.


Author information

Authors and Affiliations

Authors

Contributions

R. Raja conducted the complete research implementation on activity and posture recognition, built the architecture, and validated the model. G. Vishnuvardhanan reviewed the research and examined the test cases.

Corresponding author

Correspondence to R. Raja Subramanian.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 676 kb)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Subramanian, R.R., Govindaraj, V. HARNet: design and evaluation of a deep genetic algorithm for recognizing yoga postures. SIViP (2024). https://doi.org/10.1007/s11760-024-03173-6


  • DOI: https://doi.org/10.1007/s11760-024-03173-6

