
RETRACTED ARTICLE: Action recognition using Correlation of Temporal Difference Frame (CTDF)—an algorithmic approach

  • Original Research
  • Published in the Journal of Ambient Intelligence and Humanized Computing

This article was retracted on 23 May 2022


Abstract

In many real-world applications, such as video surveillance systems, human activities are captured and retained as multimodal information for authorized actions. However, the accuracy of recognizing such actions depends heavily on many factors, including occlusion, illumination, and cluttered environments. In this work we propose the correlation of temporal difference frame (CTDF) algorithm, which captures the local maxima of every small movement together with its neighboring information. The temporal difference computed between frames, the block size defined to gather surrounding information, and finally the one-to-all comparison of points between identified frames together greatly increase recognition accuracy. The algorithm takes raw video input from the standard UT Interaction and BIT Interaction datasets. Features extracted with the proposed algorithm are passed through variants of SVM, yielding state-of-the-art results: 95.83% accuracy on the UT Interaction dataset and 90.4% on the BIT Interaction dataset.
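The abstract describes the pipeline only at a high level: a temporal difference between consecutive frames, local maxima gathered per block, and the resulting points compared across frames. The following is a minimal sketch of the first two steps, not the authors' actual CTDF implementation; the thresholding, block size, and synthetic frames are illustrative assumptions.

```python
import numpy as np

def temporal_difference(prev_frame, next_frame, threshold=20):
    """Absolute pixel-wise difference between consecutive grayscale frames,
    with small (sub-threshold) changes suppressed to zero."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return np.where(diff >= threshold, diff, 0).astype(np.uint8)

def block_local_maxima(diff_frame, block_size=8):
    """Scan non-overlapping blocks and record (row, col, value) of each
    block's maximum difference; blocks with no motion are skipped."""
    h, w = diff_frame.shape
    maxima = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = diff_frame[y:y + block_size, x:x + block_size]
            if block.max() > 0:
                dy, dx = np.unravel_index(block.argmax(), block.shape)
                maxima.append((y + dy, x + dx, int(block.max())))
    return maxima

# Synthetic example: a bright 4x4 patch appears between two 32x32 frames.
f0 = np.zeros((32, 32), dtype=np.uint8)
f1 = np.zeros((32, 32), dtype=np.uint8)
f1[10:14, 10:14] = 200

diff = temporal_difference(f0, f1)
points = block_local_maxima(diff)  # one motion maximum, inside the patch
```

In the paper's full method, such per-block maxima from successive difference frames would then be compared one-to-all and the resulting features fed to SVM variants; those stages are not reproducible from the abstract alone.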




Author information


Corresponding author

Correspondence to M. Poonkodi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1007/s12652-022-03961-3

About this article


Cite this article

Poonkodi, M., Vadivu, G. RETRACTED ARTICLE: Action recognition using Correlation of Temporal Difference Frame (CTDF)—an algorithmic approach. J Ambient Intell Human Comput 12, 7107–7120 (2021). https://doi.org/10.1007/s12652-020-02378-0

