A Large Contextual Dataset for Classification, Detection and Counting of Cars with Deep Learning

  • T. Nathan Mundhenk
  • Goran Konjevod
  • Wesam A. Sakla
  • Kofi Boakye
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9907)

Abstract

We have created a large, diverse set of cars from overhead images (data sets, annotations, networks, and scripts are available from http://gdo-datasci.ucllnl.org/cowc/) that is useful for training a deep learner to binary classify, detect, and count them. The dataset and all related material will be made publicly available. The set contains contextual matter to aid in the identification of difficult targets. We demonstrate classification and detection on this dataset using a neural network we call ResCeption, which combines residual learning with Inception-style layers and is used to count cars in one look. This is a new way to count objects: rather than localizing each object or estimating a density map, the network produces a count directly. It is fairly accurate, fast, and easy to implement. Additionally, the counting method is not car or scene specific. It would be easy to train this method to count other kinds of objects, and counting over new scenes requires no extra setup or assumptions about object locations.
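The "count in one look" idea described above can be illustrated with a minimal NumPy sketch. It assumes the network's final layer emits logits over discrete count classes (here 0–64 cars per patch) and that a large scene is tiled into non-overlapping patches whose counts are summed; the names `expected_count`, `scene_count`, and `N_CLASSES` are illustrative, not the paper's implementation.

```python
import numpy as np

N_CLASSES = 65  # hypothetical: counts 0..64 treated as discrete classes


def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def expected_count(logits):
    """Collapse class logits into a scalar count via the expectation
    over count classes (a soft-argmax)."""
    p = softmax(logits)
    return float((p * np.arange(p.shape[-1])).sum(axis=-1))


def scene_count(patch_logits):
    """Count a large scene by summing per-patch expected counts,
    assuming the patches tile the scene without overlap."""
    return sum(expected_count(l) for l in patch_logits)


# Example: two patches whose logits strongly favor counts 3 and 5.
a = np.zeros(N_CLASSES); a[3] = 50.0
b = np.zeros(N_CLASSES); b[5] = 50.0
print(round(scene_count([a, b])))  # → 8
```

Because each patch is counted in a single forward pass, no per-object localization or density estimation step is needed, which is what makes the approach fast and scene agnostic.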

Keywords

Deep Learning · CNN · COWC · Context · Cars · Automobile · Classification · Detection · Counting

Notes

Acknowledgments

This work was funded from the NA-22 project at Lawrence Livermore National Laboratory’s Global Security directorate. Thanks to ISPRS, DGPF and BSF Swissphoto for permission to use their data.


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • T. Nathan Mundhenk (1)
  • Goran Konjevod (1)
  • Wesam A. Sakla (1)
  • Kofi Boakye (1)

  1. Computational Engineering Division, Lawrence Livermore National Laboratory, Livermore, USA