
Fuzzy dragon deep belief neural network for activity recognition using hierarchical skeleton features

  • Special Issue
  • Published in: Evolutionary Intelligence

Abstract

In computer vision, human activity recognition is an active research area with applications in human–computer interaction, healthcare, military applications, and security surveillance. Activity recognition aims to identify the goals and actions of one or more people from a sequence of observations of their actions and the environmental conditions. Still, a number of challenges and issues remain, motivating the development of new activity recognition methods that improve accuracy under more realistic conditions. This paper proposes an error-based fuzzy dragon deep belief network (error-based fuzzy DDBN), which integrates a fuzzy system with the DDBN classifier, to recognize human activity in complex and diverse scenarios. Keyframes are first selected from the frames of the input video based on the Bhattacharyya coefficient. From these keyframes, features are extracted using the scale-invariant feature transform (SIFT), the color histogram of the spatio-temporal interest (dominant) points, and the hierarchical skeleton. Finally, the features are fed to the classifier, where classification is performed by the proposed error-based fuzzy DDBN to recognize the activity. Experiments on two datasets, KTH and Weizmann, analyze the performance of the proposed classifier. The results show that the proposed classifier performs activity recognition well, attaining a maximum accuracy of 1, a sensitivity of 0.99, and a specificity of 0.991.
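To illustrate the keyframe-selection step described above, the sketch below shows one common way to pick keyframes with the Bhattacharyya coefficient, BC(p, q) = Σ_i √(p_i · q_i), computed between normalized frame histograms: a frame is kept as a new keyframe whenever its histogram overlaps the previous keyframe's histogram by less than a threshold. This is a minimal sketch using OpenCV, not the authors' implementation; the grayscale histogram, the 32-bin size, and the 0.9 threshold are assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): Bhattacharyya-coefficient
# keyframe selection from a video. Histogram type, bin count, and the
# 0.9 threshold are assumptions for this example.
import cv2
import numpy as np

def bhattacharyya_coefficient(hist_p, hist_q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for normalized histograms;
    1 means identical distributions, 0 means no overlap."""
    return float(np.sum(np.sqrt(hist_p * hist_q)))

def frame_histogram(frame, bins=32):
    """Normalized grayscale intensity histogram of a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-12)

def extract_keyframes(video_path, threshold=0.9):
    """Keep a frame as a keyframe when its histogram overlaps the
    previous keyframe's histogram by less than `threshold`, i.e. the
    scene content has changed appreciably."""
    cap = cv2.VideoCapture(video_path)
    keyframes, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_histogram(frame)
        if prev_hist is None or bhattacharyya_coefficient(hist, prev_hist) < threshold:
            keyframes.append(frame)
            prev_hist = hist
    cap.release()
    return keyframes
```

In the paper's pipeline, the selected keyframes would then feed the SIFT, color-histogram, and hierarchical-skeleton feature extraction before classification by the error-based fuzzy DDBN.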




Author information


Corresponding author

Correspondence to Paul T. Sheeba.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Sheeba, P.T., Murugan, S. Fuzzy dragon deep belief neural network for activity recognition using hierarchical skeleton features. Evol. Intel. 15, 907–924 (2022). https://doi.org/10.1007/s12065-019-00245-2

