
Pedestrian Tracking in the Compressed Domain Using Thermal Images

  • Ichraf Lahouli
  • Robby Haelterman
  • Zied Chtourou
  • Geert De Cubber
  • Rabah Attia
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 842)

Abstract

The video surveillance of sensitive facilities or borders poses many challenges, such as high bandwidth requirements and high computational cost. In this paper, we propose a framework for detecting and tracking pedestrians in the compressed domain using thermal images. First, the detection process combines saliency maps and contrast enhancement techniques, followed by a global image content descriptor based on Discrete Chebychev Moments (DCM) and a linear Support Vector Machine (SVM) as a classifier. Second, the tracking process exploits raw H.264 compressed video streams with limited computational overhead. In addition to two well-known public datasets, we have generated our own dataset by carrying out six different scenarios of suspicious events with a thermal camera. The obtained results show the effectiveness and the low computational requirements of the proposed framework, which make it suitable for real-time applications and onboard implementation.
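To make the described pipeline concrete, below is a minimal sketch (not the authors' code) of the two stages summarised in the abstract. It assumes 8-bit single-channel thermal frames, candidate regions obtained by thresholding a simple frequency-tuned-style saliency map after CLAHE contrast enhancement, a global DCM descriptor built from orthonormalised discrete Chebyshev polynomials, a linear SVM from scikit-learn, and H.264 motion vectors already exported per frame (e.g. via FFmpeg's "+export_mvs" flag) as rows of (src_x, src_y, dst_x, dst_y). The helper names (`enhance_and_saliency`, `candidate_boxes`, `dcm_descriptor`, `propagate_box`) and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC


def enhance_and_saliency(frame):
    """CLAHE contrast enhancement followed by a frequency-tuned-style
    saliency map: distance between the frame mean and a blurred frame."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(frame)
    blurred = cv2.GaussianBlur(enhanced, (5, 5), 0).astype(np.float32)
    saliency = np.abs(blurred - enhanced.mean())
    saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX)
    return enhanced, saliency.astype(np.uint8)


def candidate_boxes(saliency, thresh=128, min_area=64):
    """Binarise the saliency map and return bounding boxes of salient blobs."""
    _, binary = cv2.threshold(saliency, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]


def chebyshev_basis(n_points, order):
    """Orthonormal discrete Chebyshev (Gram) polynomials up to `order`,
    obtained by QR-orthonormalising the monomials on the grid 0..n_points-1."""
    x = np.arange(n_points, dtype=np.float64)
    vander = np.vander(x, order + 1, increasing=True)
    q, _ = np.linalg.qr(vander)
    return q                                   # shape (n_points, order + 1)


def dcm_descriptor(patch, order=8, size=64):
    """Global DCM descriptor: T_pq = t_p^T f t_q for a resized patch f."""
    f = cv2.resize(patch, (size, size)).astype(np.float64)
    t = chebyshev_basis(size, order)
    return (t.T @ f @ t).ravel()               # (order + 1)^2 features


def propagate_box(box, motion_vectors):
    """Compressed-domain tracking step: shift a detected box by the median
    H.264 motion vector whose destination falls inside it."""
    x, y, w, h = box
    dst_x, dst_y = motion_vectors[:, 2], motion_vectors[:, 3]
    inside = (dst_x >= x) & (dst_x < x + w) & (dst_y >= y) & (dst_y < y + h)
    if not inside.any():
        return box
    dx = np.median(motion_vectors[inside, 2] - motion_vectors[inside, 0])
    dy = np.median(motion_vectors[inside, 3] - motion_vectors[inside, 1])
    return (int(x + dx), int(y + dy), w, h)


# Usage sketch: train the linear SVM on DCM descriptors of labelled patches,
# then classify the salient candidates found in each new frame.
# clf = LinearSVC().fit(np.stack([dcm_descriptor(p) for p in patches]), labels)
```

In this sketch the classifier runs only on the sparse salient candidates of decoded keyframes, while intermediate frames are handled purely from the motion vectors of the compressed stream, which is what keeps the per-frame computational cost low.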


Acknowledgment

The generation of the proposed dataset using thermal cameras is supported by MIRTECHNOLOGIES SA, Chemin des Eysines 51, 1226 Nyon, CH.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ichraf Lahouli 1, 2, 3
  • Robby Haelterman 1
  • Zied Chtourou 2
  • Geert De Cubber 1
  • Rabah Attia 3
  1. Royal Military Academy, Brussels, Belgium
  2. VRIT Lab, Military Academy of Tunisia, Nabeul, Tunisia
  3. SERCOM Lab, Tunisia Polytechnic School, La Marsa, Tunisia
