Classification of Synthetic Aperture Radar-Ground Range Detected Image Using Advanced Convolution Neural Networks


Abstract

Synthetic aperture radar (SAR) image data combined with advanced algorithms have opened up many opportunities and applications in SAR image processing, such as the classification of individual objects. This paper presents pixel-based convolutional neural network (CNN) models for the complete, fast, and accurate categorization of high-resolution SAR Ground Range Detected (GRD) images of a city or town, agriculture, and surrounding areas that contain other features. The land area investigated in this paper is labeled into seven categories: forest, water bodies, settlements, agriculture, vegetation, sand, and open area. The classification accuracy was compared with that of other land regions containing similar categories. The full classification results on selected sections of the imagery show that the pixel-based convolutional neural network models AlexNet, GoogLeNet, and ResNet-50 are viable for both object identification and classification. Performance is evaluated in terms of training accuracy, testing accuracy, and the kappa coefficient and confusion matrices for the GRD SAR image. Overall accuracies greater than 90% are achieved with the CNN techniques.
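As a point of reference for the evaluation metrics named in the abstract, the following is a minimal sketch (not taken from the paper) of how overall accuracy and the kappa coefficient can be computed from a confusion matrix; the three-class matrix and its counts are hypothetical and used only for illustration.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Compute overall accuracy and Cohen's kappa from a confusion matrix.

    `confusion` is a square array whose rows are reference (true) classes
    and whose columns are predicted classes, e.g. land-cover categories
    such as forest, water bodies, settlements, agriculture, vegetation,
    sand, and open area.
    """
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    observed = np.trace(c) / n  # observed (overall) accuracy
    expected = (c.sum(axis=1) * c.sum(axis=0)).sum() / n**2  # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3-class confusion matrix (rows: true, columns: predicted).
cm = [[50, 2, 1],
      [3, 45, 4],
      [0, 5, 40]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```

The kappa coefficient discounts the agreement expected by chance from the class proportions, so it is a stricter summary of a classifier's confusion matrix than overall accuracy alone.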



Acknowledgments

The authors would like to thank the assistant editor and the anonymous reviewers for their feedback and suggestions, which helped us significantly improve the technical quality and presentation of this article.

Author information

Corresponding author

Correspondence to Battula Balnarsaiah.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Balnarsaiah, B., Prasad, T.S. & Laxminarayana, P. Classification of Synthetic Aperture Radar-Ground Range Detected Image Using Advanced Convolution Neural Networks. Remote Sens Earth Syst Sci 4, 13–29 (2021). https://doi.org/10.1007/s41976-020-00042-x
