Star: A Contextual Description of Superpixels for Remote Sensing Image Classification
Remote sensing images are one of the main sources of information about the Earth's surface. They are widely used to automatically generate thematic maps that show the land cover of an area. This is traditionally done with supervised classifiers, which learn patterns from image pixels annotated by the user and then assign labels to the remaining pixels. However, as the spatial resolution of the images increases with advances in acquisition technology, pixelwise classification is no longer adequate, even when combined with contextual information. We therefore propose a new superpixel descriptor, called the Star descriptor, that represents each superpixel by combining its own visual cues with those of its context. Unlike most methods in the literature, the new approach does not require any prior classification to aggregate context. Experiments carried out on urban images showed the effectiveness of the Star descriptor for generating land cover thematic maps.
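The full definition of the Star descriptor is not given in this excerpt, but the general idea of describing a superpixel by its own visual cues plus aggregated cues from surrounding superpixels can be sketched as follows. This is a minimal illustration, assuming mean color as the visual cue and 4-connected adjacency between superpixels as the context neighborhood; the function name and these choices are illustrative, not the paper's actual method.

```python
import numpy as np

def contextual_descriptor(image, labels):
    """Describe each superpixel by its own mean color concatenated with the
    mean color of its adjacent superpixels (a crude stand-in for contextual
    aggregation; no prior classification is needed, only the segmentation).

    image:  H x W x C float array
    labels: H x W int array of superpixel ids in 0..n-1 (e.g. from SLIC)
    """
    n = labels.max() + 1
    # own visual cue: per-superpixel mean color
    means = np.zeros((n, image.shape[-1]))
    for k in range(n):
        means[k] = image[labels == k].mean(axis=0)
    # superpixel adjacency from 4-connected label transitions
    adj = [set() for _ in range(n)]
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            adj[a].add(b); adj[b].add(a)
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            adj[a].add(b); adj[b].add(a)
    # contextual cue: average of the neighbors' own cues
    context = np.array([means[sorted(nb)].mean(axis=0) if nb else means[k]
                        for k, nb in enumerate(adj)])
    return np.hstack([means, context])
```

On a toy image split into a red and a blue superpixel, each descriptor row concatenates the superpixel's own color with its neighbor's, so the two regions remain distinguishable even though their contexts mirror each other.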
Keywords: Remote sensing · Thematic maps · Land cover · Contextual descriptor
This work was partially financed by CNPq, CAPES, and Fapemig. The authors would like to thank Telops Inc. (Québec, Canada) for acquiring and providing the data used in this study, the IEEE GRSS Image Analysis and Data Fusion Technical Committee and Dr. Michal Shimoni (Signal and Image Centre, Royal Military Academy, Belgium) for organizing the 2014 Data Fusion Contest, the Centre de Recherche Public Gabriel Lippmann (CRPGL, Luxembourg) and Dr. Martin Schlerf (CRPGL) for their contribution of the Hyper-Cam LWIR sensor, and Dr. Michaela De Martino (University of Genoa, Italy) for her contribution to data preparation.
- 2. Achanta, R., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: SLIC superpixels. Technical report 149300, EPFL, June 2010
- 3. Vargas, J.E., Falcão, A.X., dos Santos, J.A., Esquerdo, J.C.D.M., Coutinho, A.C., Antunes, J.F.G.: Contextual superpixel description for remote sensing image classification. In: International Geoscience and Remote Sensing Symposium. IEEE (2015)
- 6. Hanson, A.R., Riseman, E.M.: VISIONS: a computer system for interpreting scenes. In: Hanson, A.R., Riseman, E.M. (eds.) Computer Vision Systems. Academic Press, New York (1978)
- 9. Rabinovich, A., Vedaldi, A., Galleguillos, C., Wiewiora, E., Belongie, S.: Objects in context. In: IEEE 11th International Conference on Computer Vision, pp. 1–8, October 2007
- 12. Mottaghi, R., Chen, X., Liu, X., Cho, N.G., Lee, S.W., Fidler, S., Urtasun, R., Yuille, A.: The role of context for object detection and semantic segmentation in the wild. In: Proceedings of the CVPR IEEE, pp. 891–898, June 2014
- 13. Lim, J.J., Arbelaez, P., Gu, C., Malik, J.: Context by region ancestry. In: IEEE International Conference on Computer Vision, pp. 1978–1985, September 2009
- 15. Silva, F.B., Goldenstein, S., Tabbone, S., Torres, R. da S.: Image classification based on bag of visual graphs. In: IEEE International Conference on Image Processing, pp. 4312–4316, September 2013
- 16. IEEE: GRSS data fusion contest (2014). http://www.grssieee.org/community/technical-committees/data-fusion/
- 17. dos Santos, J.A., Penatti, O.A.B., Torres, R. da S.: Evaluating the potential of texture and color descriptors for remote sensing image retrieval and classification. In: VISAPP, Angers, France, May 2010