Geospatial Target Detection from High-Resolution Remote-Sensing Images Based on PIIFD Descriptor and Salient Regions

  • Fariborz Ghorbani
  • Hamid Ebadi
  • Amin Sedaghat
Research Article

Abstract

Geospatial target detection in visible remote-sensing images is one of the most important problems in the analysis of aerial and satellite imagery. Advances in remote-sensing techniques and increasing image resolution have opened the way for automatic analysis. Existing geospatial target detection methods face a variety of challenges, and local features, which have recently seen extensive use, play a very effective role in addressing them. Large intensity variations between targets and backgrounds across images are among the most critical of these challenges: most existing local feature descriptors cannot handle variations of this magnitude and produce errors when facing them. In this paper, the PIIFD descriptor is applied to cope with severe intensity variations, as this descriptor is symmetric with respect to contrast reversal. The proposed framework for automatic geospatial target detection is a supervised approach based on local feature extraction and description, and consists of three main steps: training, image searching, and geospatial target detection. In the training step, local features are extracted by the UR-SIFT algorithm, which is well suited to remote-sensing images, and are described by the PIIFD descriptor. Because of the large number of extracted features, the SABOVW model is used for quantization; this model softly assigns features to the codebook, and the resulting representation is used to train an SVM classifier. In the second step, for computational efficiency, salient regions of the image are detected by combining saliency models, which reduces the image area that must be searched for geospatial targets. In the third step, the salient regions are scanned with a sliding-window approach and a descriptor is generated for each window position.
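As an illustration only (not the authors' implementation), the soft-assignment quantization that a SABOVW-style model performs can be sketched in plain NumPy; the Gaussian kernel and the `sigma` parameter are assumptions for this sketch:

```python
import numpy as np

def soft_assign_bovw(descriptors, codebook, sigma=1.0):
    """Build a soft-assignment bag-of-visual-words histogram.

    Each local descriptor votes for all codewords with Gaussian-kernel
    weights based on distance, instead of a single hard nearest-neighbour
    assignment.
    """
    # Pairwise squared Euclidean distances: shape (n_descriptors, n_words)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # Kernel weights; each descriptor's weights sum to 1
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Accumulate the per-codeword votes into one histogram
    hist = w.sum(axis=0)
    # L1-normalise so images with different feature counts are comparable
    return hist / hist.sum()
```

Soft assignment of this kind reduces quantization error when a descriptor falls near the boundary between two codewords, which is the motivation given for models such as SABOVW.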
Finally, geospatial targets are detected by applying the trained SVM model to each window. To evaluate the efficiency of the PIIFD descriptor, its performance is compared with descriptors such as SIFT, DAISY, LSS, and LBP. The results show that the PIIFD descriptor performs better in detecting geospatial targets.
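The sliding-window scan over a salient region, with the trained SVM applied to each window, can be sketched as follows; here `describe` (the per-window feature pipeline) and `classifier` are hypothetical placeholders, not the paper's actual components, and window size and stride are assumed values:

```python
def detect_in_region(region, classifier, describe, win=64, stride=16):
    """Scan a salient region with a sliding window; keep positive windows.

    `describe` maps an image patch to a fixed-length feature vector
    (in the paper this role is played by the PIIFD-based SABOVW
    representation); `classifier` is any trained model exposing a
    decision_function method, such as a linear SVM.
    """
    h, w = region.shape[:2]
    detections = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            feat = describe(region[y:y + win, x:x + win])
            score = classifier.decision_function([feat])[0]
            if score > 0:  # positive SVM margin -> candidate target window
                detections.append((x, y, win, win, score))
    return detections
```

Restricting this scan to the previously detected salient regions, rather than the full image, is what yields the computational saving described in the second step.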

Keywords

Airplane targets · Local features · Descriptor · Detector · Saliency models · SABOVW model


Copyright information

© Indian Society of Remote Sensing 2019

Authors and Affiliations

  1. Department of Photogrammetry and Remote Sensing, Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology, Tehran, Iran
  2. Department of Geomatics Engineering, Faculty of Civil Engineering, University of Tabriz, Tabriz, Iran