Human action recognition: a framework of statistical weighted segmentation and rank correlation-based selection

Abstract

Human action recognition from video sequences has recently received considerable attention in computer vision owing to its range of applications in surveillance, healthcare, smart homes, and tele-immersion, to name but a few. It nevertheless still faces several challenges, such as variations in human appearance, occlusion, changes in illumination, and complex backgrounds. In this article, we address multiple human detection and classification using a novel statistical weighted segmentation and rank correlation-based feature selection approach. First, preprocessing is performed on a set of frames to remove noise and to make the foreground maximally distinguishable from the background. A novel weighted segmentation method is then introduced to extract the humans prior to feature extraction. Three types of features, namely color, shape, and texture, are extracted and combined using a serial-based feature fusion method. To avoid redundancy, a rank correlation-based feature selection technique is employed, which acts as a feature optimizer and leads to improved classification accuracy. The proposed method is validated on six datasets (Weizmann, KTH, MuHAVi, WVU, UCF Sports, and MSR Action) and evaluated with seven performance measures. A fair comparison with existing work is also provided, which demonstrates the advantage of the proposed approach over other techniques.
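
The full pipeline is not reproduced here, so the sketch below is only a hypothetical illustration of the two generic ideas the abstract names, serial (concatenation-based) feature fusion and rank correlation-based feature selection, finished with a stand-in k-NN classifier. The function names, the 0.9 threshold, the classifier choice, and the random toy features are assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: serial fusion of color/shape/texture
# vectors, then a Spearman rank-correlation filter to drop redundant features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import KNeighborsClassifier

def serial_fusion(color, shape, texture):
    """Serial fusion here simply concatenates the three feature vectors per sample."""
    return np.hstack([color, shape, texture])

def rank_correlation_select(X, threshold=0.9):
    """Keep a feature only if its absolute Spearman correlation with every
    already-kept feature stays below the threshold (one plausible reading of
    'rank correlation-based selection'; the paper's exact criterion may differ)."""
    rho, _ = spearmanr(X)                 # feature-by-feature rank correlation matrix
    rho = np.abs(np.atleast_2d(rho))
    kept = []
    for j in range(X.shape[1]):
        if all(rho[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

# Toy usage with random stand-ins for the extracted per-video features.
rng = np.random.default_rng(0)
color, shape, texture = rng.random((60, 16)), rng.random((60, 8)), rng.random((60, 24))
labels = rng.integers(0, 4, size=60)

X = serial_fusion(color, shape, texture)
selected = rank_correlation_select(X)
clf = KNeighborsClassifier(n_neighbors=3).fit(X[:, selected], labels)
print(f"{len(selected)} of {X.shape[1]} features kept, "
      f"train accuracy {clf.score(X[:, selected], labels):.2f}")
```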

Acknowledgements

The authors would like to thank the HEC Startup Research Grant Program (SRGP), Pakistan (Project # 1307).

Author information

Contributions

MS conceived the idea, developed the classification design, and identified the target application. MAK performed the simulations by developing and fully integrating the different code modules, and wrote the complete manuscript; he also defined and evaluated the accuracy criteria. MF and JHS shaped the final structure of the article, identified several issues, and helped the primary authors address those shortcomings. TS carried out the final proofreading and provided technical support in the classification step, which is her research area. AR provided technical support on feature extraction and fusion and on issues raised in developing the selection approach. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Muhammad Attique Khan.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Availability of data and material

Six publicly available datasets were used to validate the proposed method: Weizmann, KTH, MuHAVi, WVU, UCF Sports, and MSR Action.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Sharif, M., Khan, M.A., Zahid, F. et al. Human action recognition: a framework of statistical weighted segmentation and rank correlation-based selection. Pattern Anal Applic 23, 281–294 (2020). https://doi.org/10.1007/s10044-019-00789-0
