Enabling More Accurate Bounding Boxes for Deep Learning-Based Real-Time Human Detection

  • Hyunsu Jeong
  • Jeonghwan Gwak
  • Cheolbin Park
  • Manish Khare
  • Om Prakash
  • Jong-In Song
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 524)

Abstract

Although human detection is widely recognized and used in many areas, its application to behavioral analysis in medical research has rarely been reported. Recently, however, active efforts have been made to recognize behavioral diseases by measuring gait variability through pattern analysis of human detection results in camera videos. For this purpose, robust human detection algorithms are crucial. In this work, we modified deep learning detection models to detect humans only, rather than multiple object classes. We also improved the localization of human detection by adjusting the input image according to the ratio of objects in the image and by refining the resulting bounding boxes through interpolation. Experimental results demonstrate that these proposals significantly increase the accuracy of human detection.
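The abstract describes two post-processing ideas without implementation detail: restricting a multi-class detector to the person class, and refining bounding boxes by interpolation. Below is a minimal Python sketch of one plausible reading of those steps; the detection record layout, the function names, and the choice of linear interpolation between detections in neighboring frames are illustrative assumptions, not the authors' exact method.

import numpy as np

# Hypothetical detection record: (frame_index, class_name, [x1, y1, x2, y2], score).

def keep_person_boxes(detections, score_threshold=0.5):
    # Restrict a multi-class detector's output to confident 'person' boxes.
    return [d for d in detections
            if d[1] == "person" and d[3] >= score_threshold]

def interpolate_box(box_a, box_b, t):
    # Linear blend of two [x1, y1, x2, y2] boxes, with t in [0, 1].
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    return ((1.0 - t) * a + t * b).tolist()

def fill_missed_frames(track):
    # track: {frame_index: box} for one person; fill detection gaps
    # by interpolating between the surrounding detected frames.
    frames = sorted(track)
    filled = dict(track)
    for f0, f1 in zip(frames, frames[1:]):
        for f in range(f0 + 1, f1):
            filled[f] = interpolate_box(track[f0], track[f1],
                                        (f - f0) / (f1 - f0))
    return filled

# Example: detections at frames 0 and 3; frames 1 and 2 are interpolated.
track = {0: [10, 20, 60, 120], 3: [16, 20, 66, 120]}
print(fill_missed_frames(track))

For gait analysis of the kind motivated above, filling and smoothing boxes in this way would stabilize the per-frame position signal from which gait variability is measured.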

Keywords

Human detection · Deep learning · Bounding box regression · Localization · Real-time analysis

Notes

Acknowledgements

This work was supported by the Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2016M3C7A1905477, NRF-2014M3C7A1046050) and the Basic Science Research Program through the NRF funded by the Ministry of Education (NRF-2017R1D1A1B03036423). This study was approved by the Institutional Review Board of Gwangju Institute of Science and Technology (IRB no. 20180629-HR-36-07-04). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, Korea
  2. Biomedical Research Institute, Seoul National University Hospital (SNUH), Seoul, Korea
  3. Department of Radiology, Seoul National University Hospital (SNUH), Seoul, Korea
  4. Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
  5. Centre of Computer Education, Institute of Professional Studies, University of Allahabad, Allahabad, India