
Signal, Image and Video Processing, Volume 13, Issue 8, pp 1629–1637

Target tracking and classification using compressive sensing camera for SWIR videos

  • Chiman Kwan
  • Bryan Chou
  • Jonathan Yang
  • Akshay Rangamani
  • Trac Tran
  • Jack Zhang
  • Ralph Etienne-Cummings
Original Paper

Abstract

The pixel-wise code exposure (PCE) camera is a compressive sensing camera with several advantages, including low power consumption and a high compression ratio. One notable advantage is the ability to control the exposure time of individual pixels. Conventional approaches to using PCE cameras first reconstruct the original frames, a time-consuming and lossy process, and then perform target tracking and classification on the reconstructed frames; such approaches fail if the compressive measurements are used directly. In this paper, we present a deep learning approach that performs target tracking and classification directly in the compressive measurement domain, without any frame reconstruction. Our approach has two parts: tracking and classification. Tracking is done via detection using You Only Look Once (YOLO), and classification is achieved using a residual network (ResNet). Extensive simulations on short-wave infrared (SWIR) videos demonstrate the efficacy of the proposed approach.
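To make the measurement model concrete, the sketch below simulates how a pixel-wise coded exposure camera collapses a block of video frames into a single coded image: each pixel integrates over a short exposure window that starts at a per-pixel random time. This is a minimal illustration only; the block size, exposure length, normalization, and SWIR frame dimensions are assumptions for the example and are not the parameters of the camera described in the paper. Downstream YOLO detection and ResNet classification would be applied to such coded images directly, with no frame reconstruction.

```python
# Minimal sketch (not the authors' code): simulate pixel-wise coded exposure
# (PCE) compressive measurements from a block of video frames.
# Block size, exposure length, and frame dimensions are illustrative assumptions.
import numpy as np

def pce_measure(frames, exposure_len=4, rng=None):
    """Collapse a block of frames (T, H, W) into one coded-exposure image.

    Each pixel integrates over `exposure_len` consecutive frames starting at a
    per-pixel random offset, rather than all pixels sharing one global shutter.
    """
    rng = np.random.default_rng(rng)
    T, H, W = frames.shape
    starts = rng.integers(0, T - exposure_len + 1, size=(H, W))   # per-pixel exposure start
    coded = np.zeros((H, W), dtype=np.float32)
    for t in range(T):
        exposed = (starts <= t) & (t < starts + exposure_len)     # pixels exposed at frame t
        coded[exposed] += frames[t][exposed]
    return coded / exposure_len                                    # normalize by exposure length

# Example: one coded image from a 16-frame block of (hypothetical) 128 x 160 SWIR frames.
block = np.random.rand(16, 128, 160).astype(np.float32)
coded_frame = pce_measure(block, exposure_len=4, rng=0)
print(coded_frame.shape)  # (128, 160)
```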

Keywords

Compressive measurement · Pixel-wise code exposure (PCE) camera · Multi-target tracking and classification · SWIR

Acknowledgements

This research was supported by the US Air Force under contract FA8651-17-C-0017. The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the US Government. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited.

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Applied Research LLC, Rockville, USA
  2. Johns Hopkins University, Baltimore, USA
  3. MIT, Cambridge, USA
