
Image Multi-human Behavior Analysis Based on Low Rank Texture Direction


Abstract

The main task of computer vision is to understand the behavioral meaning of targets in images or video. In this paper, we present a method for human behavior analysis based on low-rank texture direction. A Faster R-CNN network model is chosen to locate multiple human targets in complex scene images, and a global-contrast saliency method is used to highlight the human targets and suppress background information in the image. The Transform Invariant Low-rank Textures (TILT) method is then used to extract the low-rank texture of each target, and the overall low-rank texture direction serves as the human behavior descriptor. Target behavior analysis experiments were carried out on a public dataset, covering segmentation and storage of human body regions, calibration of the movement direction of each human target in the video, and calculation of the overall human behavior trend.
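The abstract names three building blocks: Faster R-CNN for detection, global-contrast saliency, and TILT for the low-rank texture direction. As a rough illustration of the last step only, the sketch below decomposes an image patch into a low-rank component A and a sparse error E via the inexact ALM algorithm for robust PCA (the convex core of TILT, without the geometric transform that full TILT additionally optimizes), then estimates a dominant orientation from A with a structure tensor. This is a minimal sketch, not the authors' implementation; the helper names rpca_inexact_alm and dominant_direction are illustrative, and the structure-tensor step is a stand-in for the direction TILT recovers from its rectifying transform.

```python
import numpy as np

def rpca_inexact_alm(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into low-rank A plus sparse E by minimizing
    ||A||_* + lam * ||E||_1  s.t.  D = A + E  (inexact ALM)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    spec = np.linalg.norm(D, 2)                     # largest singular value
    Y = D / max(spec, np.abs(D).max() / lam)        # dual variable init
    mu, rho = 1.25 / spec, 1.5
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        # Singular value thresholding for the low-rank part.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Elementwise soft thresholding for the sparse part.
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - A - E
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return A, E

def dominant_direction(A):
    """Estimate the dominant gradient orientation (radians) of the
    low-rank component via its structure tensor; the texture stripes
    run perpendicular to this angle."""
    gy, gx = np.gradient(A.astype(float))           # axis 0 = rows (y)
    Jxx, Jyy, Jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)

if __name__ == "__main__":
    # Synthetic test: diagonal stripes (rank-2 pattern) plus sparse outliers.
    y, x = np.mgrid[0:64, 0:64]
    D = np.sin(0.5 * (x + y))
    rng = np.random.default_rng(0)
    mask = rng.random(D.shape) < 0.05
    D = D + mask * rng.normal(0.0, 5.0, D.shape)
    A, E = rpca_inexact_alm(D)
    print(f"dominant gradient orientation: "
          f"{np.degrees(dominant_direction(A)):.1f} deg")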




Author information


Correspondence to Mengjun Li.


About this article


Cite this article

Li, M., Zhang, G. Image Multi-human Behavior Analysis Based on Low Rank Texture Direction. J Sign Process Syst 90, 1245–1255 (2018). https://doi.org/10.1007/s11265-018-1344-0

