
Abstract

Human action recognition (HAR) is one of the trending research topics in computer vision, with broad applications such as human-computer interaction and video surveillance that depend on understanding human actions in video. The problem with action recognition algorithms such as 3D CNNs, two-stream networks, and CNN-LSTMs is that they rely on highly complex models with large numbers of parameters, which makes them difficult to train and requires high-end machines for real-time recognition. Therefore, the present research proposes a HAR system based on 2D skeleton features and a KNN classifier to overcome these problems of model complexity and response time.
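The pipeline described in the abstract (2D skeleton keypoints fed to a KNN classifier) can be illustrated with a minimal sketch. The sketch below is not the paper's reported configuration: it assumes OpenPose-style BODY_25 keypoints (25 joints, each with x and y coordinates), uses random placeholder arrays in place of a real pose-estimation step, and picks an illustrative normalisation and k = 5. scikit-learn's KNeighborsClassifier stands in for the KNN stage.

```python
# Minimal sketch: action classification from 2D skeleton keypoints with KNN.
# The keypoint arrays are placeholders for per-frame output of a pose
# estimator such as OpenPose; joint count, normalisation, and k are
# illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

N_JOINTS = 25      # OpenPose BODY_25 layout (assumption)
N_SAMPLES = 600    # placeholder dataset size
N_ACTIONS = 3      # hypothetical number of action classes

def to_feature(skeleton: np.ndarray) -> np.ndarray:
    """Centre the skeleton on its mean joint position and scale by its
    overall spread, so the feature is invariant to image position and
    subject size; then flatten to a (N_JOINTS * 2,) vector."""
    centred = skeleton - skeleton.mean(axis=0, keepdims=True)
    scale = max(float(np.linalg.norm(centred)), 1e-6)
    return (centred / scale).ravel()

# Placeholder skeletons and labels standing in for real extracted data.
rng = np.random.default_rng(0)
skeletons = rng.random((N_SAMPLES, N_JOINTS, 2))
labels = rng.integers(0, N_ACTIONS, size=N_SAMPLES)

features = np.stack([to_feature(s) for s in skeletons])
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # k is a tunable hyperparameter
knn.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```

Because KNN has no training-time optimisation and the feature vectors are short (50 values per skeleton here), such a pipeline stays lightweight compared with 3D CNN or CNN-LSTM models, which is the trade-off the abstract highlights.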




Author information


Corresponding author

Correspondence to Najeeb Ur Rehman Malik.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Malik, N.U.R., Abu Bakar, S.A.R., Sheikh, U.U. (2022). Multiview Human Action Recognition System Based on OpenPose and KNN Classifier. In: Mahyuddin, N.M., Mat Noor, N.R., Mat Sakim, H.A. (eds) Proceedings of the 11th International Conference on Robotics, Vision, Signal Processing and Power Applications. Lecture Notes in Electrical Engineering, vol 829. Springer, Singapore. https://doi.org/10.1007/978-981-16-8129-5_136

