
Multimedia Tools and Applications, Volume 78, Issue 5, pp 5919–5939

Fusing depth and colour information for human action recognition

  • Danilo Avola
  • Marco Bernardi
  • Gian Luca Foresti
Article

Abstract

In recent years, human action recognition systems have been increasingly developed to support a wide range of application areas, such as surveillance, behaviour analysis, security, and many others. In particular, data fusion approaches that use depth and colour information (i.e., RGB-D data) appear particularly promising for recognizing large classes of human actions with a high level of accuracy. However, existing data fusion approaches are mainly based on feature fusion strategies, which tend to suffer from some limitations, including the difficulty of combining different feature types and the management of missing information. To address these two problems, we propose an RGB-D data based human action recognition system supported by a decision fusion strategy. The system, starting from the well-known Joint Directors of Laboratories (JDL) data fusion model, analyses human actions separately for each channel (i.e., depth and colour). The actions are modelled as histograms of visual words by using the traditional Bag-of-Visual-Words (BoVW) model. Subsequently, on each channel, these actions are classified by a multi-class Support Vector Machine (SVM) classifier. Finally, the classification results are fused by a Naive Bayes Combination (NBC) method. The effectiveness of the proposed system has been evaluated on three public datasets: UTKinect-Action3D, CAD-60, and LIRIS Human Activities. Experimental results, compared with key works of the current state-of-the-art, show that the proposed system offers a concrete contribution to the action recognition field.
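
To make the described pipeline concrete, the sketch below shows one way the per-channel BoVW representation, the per-channel multi-class SVMs, and the NBC decision fusion could fit together. It is a minimal illustration assuming scikit-learn; the descriptor extraction step, the parameter values (e.g., codebook size), and all function names are illustrative placeholders rather than the authors' implementation.

    # Minimal sketch: per-channel BoVW histograms, per-channel SVMs,
    # and Naive Bayes Combination at decision level. All names are
    # hypothetical placeholders, not the authors' code.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_codebook(stacked_descriptors, n_words=1000):
        """Cluster local descriptors (e.g., SIFT/SURF) into visual words."""
        return KMeans(n_clusters=n_words, n_init=4).fit(stacked_descriptors)

    def bovw_histogram(descriptors, codebook):
        """Quantize one video's descriptors into an L1-normalized histogram."""
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_channel_svm(histograms, labels):
        """One multi-class SVM per channel (one-vs-one in scikit-learn)."""
        return SVC(kernel='rbf', C=10.0).fit(histograms, labels)

    def confusion_prob(y_true, y_pred, n_classes, alpha=1.0):
        """Row-normalized confusion matrix with Laplace smoothing:
        cm[k, s] estimates P(classifier outputs s | true class is k)."""
        cm = np.full((n_classes, n_classes), alpha)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1.0
        return cm / cm.sum(axis=1, keepdims=True)

    def nbc_fuse(pred_depth, pred_rgb, cm_depth, cm_rgb, priors):
        """Naive Bayes Combination: the support for class k is
        P(k) * P(depth label | k) * P(colour label | k); pick the argmax."""
        support = priors * cm_depth[:, pred_depth] * cm_rgb[:, pred_rgb]
        return int(np.argmax(support))

At test time each channel produces a hard label from its own SVM, and nbc_fuse combines the two labels through confusion matrices estimated on a held-out validation split. Since the channels are classified independently, one factor can simply be dropped from the product when its channel is unavailable, which is one way a decision-level scheme can cope with the missing-information issue mentioned above.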

Keywords

Human action recognition · Decision-level fusion · Bag-of-Visual-Words · Naive Bayes combination · Support vector machine · RGB-D

Notes

Acknowledgments

This work was supported in part by the MIUR under grant “Departments of Excellence 2018-2022” of the Department of Computer Science of Sapienza University.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Danilo Avola (1, 2) (corresponding author)
  • Marco Bernardi (2)
  • Gian Luca Foresti (1)

  1. Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
  2. Department of Computer Science, Sapienza University, Rome, Italy
