
Action Recognition Based on Maximum Entropy Fuzzy Clustering Algorithm

  • Conference paper
Foundations of Intelligent Systems

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 277))


Abstract

The k-means algorithm commonly used in human action recognition suffers from several problems: the clustering result is highly sensitive to the selection of initial cluster centers, the algorithm easily falls into local optima, and only globular clusters can be found. To address these problems, this paper presents a human action recognition method based on maximum entropy fuzzy clustering together with a new interest point detection approach. First, interest points are detected in the videos with the new approach and 3D-SIFT features are extracted at these points. Then, the features of the training videos are clustered into video words by the maximum entropy fuzzy clustering algorithm to construct the codebook. Finally, a codebook-based histogram is built for every video and used for classification with a multi-class SVM. Experimental results show that the proposed method effectively improves the expressive power of the codebook and the efficiency of human action recognition.
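The abstract describes a bag-of-video-words pipeline: 3D-SIFT descriptors extracted at detected interest points are quantized into video words by maximum entropy fuzzy clustering, and each video's word histogram is classified with a multi-class SVM. As a rough sketch of that pipeline (not the authors' implementation), the Python fragment below uses the standard maximum-entropy fuzzy clustering updates: softmax memberships weighted by an entropy parameter beta, followed by membership-weighted center updates. All function names, parameter values, and the hard-assignment histogram step are illustrative assumptions.

import numpy as np

def max_entropy_fuzzy_clustering(X, n_clusters, beta=10.0, n_iter=100, tol=1e-5, seed=0):
    """Cluster descriptors X of shape (n_samples, n_features) into a codebook.

    Memberships are a softmax over negative squared distances scaled by beta
    (the entropy regularization weight); centers are membership-weighted means.
    Hyperparameter names and defaults here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # squared Euclidean distance from every descriptor to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # maximum-entropy memberships: u_ij proportional to exp(-beta * d_ij^2)
        logits = -beta * d2
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        u = np.exp(logits)
        u /= u.sum(axis=1, keepdims=True)
        # centers become membership-weighted means of the descriptors
        new_centers = (u.T @ X) / u.sum(axis=0)[:, None]
        if np.linalg.norm(new_centers - centers) < tol:
            return new_centers
        centers = new_centers
    return centers

def video_histogram(descriptors, codebook):
    """Hard-assign each 3D-SIFT descriptor of one video to its nearest video
    word and return a normalized bag-of-words histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

Per-video histograms produced this way would then train and query a multi-class SVM; the abstract does not specify the kernel, the codebook size, the value of beta, or the initialization, so those details are left as assumptions in this sketch.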



Author information


Corresponding author

Correspondence to Guoqiang Xiao.



Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tang, X., Xiao, G. (2014). Action Recognition Based on Maximum Entropy Fuzzy Clustering Algorithm. In: Wen, Z., Li, T. (eds) Foundations of Intelligent Systems. Advances in Intelligent Systems and Computing, vol 277. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-54924-3_15


  • DOI: https://doi.org/10.1007/978-3-642-54924-3_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-54923-6

  • Online ISBN: 978-3-642-54924-3

  • eBook Packages: Engineering, Engineering (R0)
