Geospatial Object Partition Based on Angular Second Moment Kernel

  • Nengyuan Liu
  • Chuan Lu
  • Ming Zhang
  • Zongyong Cui
  • Zongjie Cao
  • Rui Min
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 463)


Improvements in Synthetic Aperture Radar (SAR) image acquisition have made it possible to describe scene complexity and rich detail semantically, but traditional pixel-based methods struggle to partition geospatial objects. This paper proposes a practical geospatial object partition method for SAR images based on an angular second moment kernel (ASMK). First, a new kernel, termed ASMK, is designed to obtain accurate classification results. Then, based on the classification results, rivers and urban areas are partitioned as typical geospatial objects. To delineate urban borders accurately, a likelihood function is established to evaluate the probability that a pixel belongs to an urban area. Experiments with high-resolution TerraSAR-X spotlight data over the city of Rosenheim, Germany, demonstrate the effectiveness and accuracy of the proposed method in object partition.
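The ASMK builds on the angular second moment (ASM, also called energy), a classical texture statistic computed from a grey-level co-occurrence matrix (GLCM) as introduced by Haralick et al. (1973). As a rough illustration only, not the authors' implementation, the following NumPy sketch computes the ASM of an image patch; the grey-level quantisation and pixel offset are illustrative parameter choices:

```python
import numpy as np

def glcm_asm(patch, levels=8, offset=(0, 1)):
    """Angular second moment (energy) of a grey-level co-occurrence
    matrix: the sum of squared joint probabilities of grey-level pairs.
    High ASM indicates homogeneous texture; low ASM, disordered texture."""
    # Quantise the 8-bit patch to a small number of grey levels.
    q = np.floor(patch.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    # Count co-occurring grey-level pairs at the given offset.
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    p = glcm / glcm.sum()          # normalise to a joint probability
    return float(np.sum(p ** 2))   # ASM = sum of squared probabilities

# A uniform patch has maximal ASM (1.0); varied texture scores lower.
flat = np.full((16, 16), 100, dtype=np.uint8)
varied = (np.arange(256).reshape(16, 16) * 37 % 256).astype(np.uint8)
print(glcm_asm(flat))    # 1.0
print(glcm_asm(varied))  # well below 1.0
```

In the paper's setting, such a texture statistic would distinguish smooth, low-return river surfaces from the heterogeneous backscatter of built-up areas; how the statistic is embedded into the kernel is specific to the proposed ASMK.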


Keywords: Synthetic Aperture Radar (SAR) · Geospatial object partition · Bag of Visual Words (BOVW)



This study was supported by the Fundamental Research Funds for the Central Universities under Grant A03013023601005, the National Natural Science Foundation of China under Grants 61271287, 61301265, and U1433113, the Sichuan Province Science and Technology Support Plan under Grant 2015GZ0109, and the Shuangliu Project of the Research Institute in Chengdu, University of Electronic Science and Technology of China, under Grant RW20140005.



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Nengyuan Liu (1)
  • Chuan Lu (2)
  • Ming Zhang (2)
  • Zongyong Cui (1)
  • Zongjie Cao (1)
  • Rui Min (1)

  1. Center for Information Geoscience, University of Electronic Science and Technology of China, Chengdu, China
  2. Research Institute in Chengdu, University of Electronic Science and Technology of China, Chengdu, China
