Dyadic Interaction Recognition Using Dynamic Representation and Convolutional Neural Network

Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1567)

Abstract

Human interaction recognition can be used in video surveillance to recognise human behaviour. The goal of this research is to classify human interactions by converting video snippets into dynamic images and classifying them with a deep CNN architecture. The input interaction video is first divided into a fixed number of shorter segments. For each segment, a dynamic image is constructed that efficiently encodes the video segment into a single image containing an action silhouette, which plays an important role in interaction recognition. Discriminative features are then learned from the dynamic images and classified using a Convolutional Neural Network. The efficacy of the proposed architecture for interaction recognition is demonstrated by the results obtained on the SBU Kinect Interaction, IXMAS, and TV Human Interaction datasets.
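To illustrate the pipeline described above, the sketch below shows one common way to build dynamic images: approximate rank pooling (Bilen et al., CVPR 2016), a weighted temporal sum of frames. The abstract does not specify the exact construction, segment count, or CNN used in the paper, so the function names, the weighting scheme, and the choice of five segments here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a video segment of shape (T, H, W, C) into one dynamic
    image via approximate rank pooling: a weighted temporal sum whose
    coefficients grow with time, so later frames dominate and the
    motion silhouette is preserved. (Assumed construction; the paper's
    exact method may differ.)"""
    T = frames.shape[0]
    # Approximate rank-pooling weight for frame t (1-indexed): 2t - T - 1.
    alphas = 2.0 * np.arange(1, T + 1) - T - 1
    di = np.tensordot(alphas, frames.astype(np.float32), axes=1)
    # Rescale to 0-255 so the result can be fed to an image-based CNN.
    di -= di.min()
    if di.max() > 0:
        di *= 255.0 / di.max()
    return di.astype(np.uint8)

def video_to_dynamic_images(video: np.ndarray, num_segments: int = 5):
    """Split a video of shape (T, H, W, C) into equal-length segments and
    encode each as a dynamic image; these images are the CNN inputs."""
    return [dynamic_image(seg) for seg in np.array_split(video, num_segments, axis=0)]

# Example: a random 100-frame clip becomes 5 dynamic images.
clip = np.random.randint(0, 256, size=(100, 112, 112, 3), dtype=np.uint8)
images = video_to_dynamic_images(clip, num_segments=5)
print([im.shape for im in images])  # five images of shape (112, 112, 3)
```

Each dynamic image can then be passed to any standard image classifier; the segment-level predictions would be aggregated (e.g. averaged) to label the whole interaction clip.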



Author information

Correspondence to R. Newlin Shebiah.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Shebiah, R.N., Arivazhagan, S. (2022). Dyadic Interaction Recognition Using Dynamic Representation and Convolutional Neural Network. In: Raman, B., Murala, S., Chowdhury, A., Dhall, A., Goyal, P. (eds) Computer Vision and Image Processing. CVIP 2021. Communications in Computer and Information Science, vol 1567. Springer, Cham. https://doi.org/10.1007/978-3-031-11346-8_9

  • DOI: https://doi.org/10.1007/978-3-031-11346-8_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-11345-1

  • Online ISBN: 978-3-031-11346-8

  • eBook Packages: Computer Science (R0)
