Human action recognition using fusion of multiview and deep features: an application to video surveillance

Published in: Multimedia Tools and Applications

Abstract

Human Action Recognition (HAR) has become one of the most active research areas in artificial intelligence, owing to applications such as video surveillance. The wide range of variation among human actions in daily life makes the recognition process difficult. In this article, a new fully automated scheme is proposed for human action recognition by fusing deep neural network (DNN) and multiview features. The DNN features are first extracted with a pre-trained CNN model, namely VGG19. Multiview features are then computed from horizontal and vertical gradients, along with vertical directional features. All features are subsequently combined, and the best features are selected using three parameters: relative entropy, mutual information, and the strong correlation coefficient (SCC). These parameters drive the selection of the best feature subset through a higher-probability-based threshold function. The final selected features are passed to a Naive Bayes classifier for recognition. The proposed scheme is tested on five datasets, namely HMDB51, UCF Sports, YouTube, IXMAS, and KTH, achieving accuracies of 93.7%, 98%, 99.4%, 95.2%, and 97%, respectively. Finally, the proposed method is compared with existing techniques; the results show that it outperforms state-of-the-art methods.
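To make the pipeline concrete, the following is a minimal sketch of the fusion-and-selection idea described above, assuming PyTorch/torchvision for the pre-trained VGG19 and scikit-learn for feature selection and the Naive Bayes classifier. The helper names (`deep_features`, `multiview_features`, `fuse`, `select_and_classify`) and the mutual-information quantile cut-off are illustrative assumptions standing in for the paper's relative-entropy/MI/SCC threshold function; this is not the authors' implementation.

```python
# A minimal sketch of the fusion/selection pipeline, assuming PyTorch,
# torchvision (>= 0.13), and scikit-learn. Helper names and the quantile
# threshold are illustrative assumptions, not the authors' implementation.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

# Deep features: pre-trained VGG19 with the final classification layer
# removed, leaving the 4096-d fc7 descriptor.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg19.classifier = torch.nn.Sequential(*list(vgg19.classifier)[:-1])
vgg19.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(frame: np.ndarray) -> np.ndarray:
    """4096-d VGG19 descriptor for one RGB uint8 frame of shape (H, W, 3)."""
    with torch.no_grad():
        return vgg19(preprocess(frame).unsqueeze(0)).squeeze(0).numpy()

def multiview_features(frame: np.ndarray, bins: int = 64) -> np.ndarray:
    """Histograms of horizontal and vertical gradient magnitudes."""
    gray = frame.mean(axis=2)
    gy, gx = np.gradient(gray)  # vertical and horizontal gradients
    hx, _ = np.histogram(np.abs(gx), bins=bins, density=True)
    hy, _ = np.histogram(np.abs(gy), bins=bins, density=True)
    return np.concatenate([hx, hy])

def fuse(frame: np.ndarray) -> np.ndarray:
    """Serial fusion of the deep and multiview descriptors for one frame."""
    return np.concatenate([deep_features(frame), multiview_features(frame)])

def select_and_classify(X: np.ndarray, y: np.ndarray, keep_ratio: float = 0.5):
    """Keep the features most informative about the labels, then fit a
    Naive Bayes classifier. Mutual information with a quantile cut-off
    stands in here for the paper's entropy/MI/SCC threshold function."""
    mi = mutual_info_classif(X, y)
    mask = mi >= np.quantile(mi, 1.0 - keep_ratio)
    return GaussianNB().fit(X[:, mask], y), mask
```

Stacking `fuse(frame)` vectors for labelled training frames into `X` and calling `select_and_classify(X, y)` returns a fitted classifier together with the selected-feature mask; per-video labels could then be obtained, for example, by majority vote over frame-level predictions.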





Author information

Corresponding author: Sajid Ali Khan.


Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Khan, M.A., Javed, K., Khan, S.A. et al. Human action recognition using fusion of multiview and deep features: an application to video surveillance. Multimed Tools Appl 83, 14885–14911 (2024). https://doi.org/10.1007/s11042-020-08806-9

