Star: A Contextual Description of Superpixels for Remote Sensing Image Classification

  • Tiago M. H. C. Santana
  • Alexei M. C. Machado
  • Arnaldo de A. Araújo
  • Jefersson A. dos Santos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10125)


Remote sensing images are one of the main sources of information about the Earth's surface. They are widely used to automatically generate thematic maps that show the land cover of an area. This process is traditionally performed by supervised classifiers, which learn patterns from the image pixels annotated by the user and then assign a label to the remaining pixels. However, due to the increasing spatial resolution of images resulting from advances in acquisition technology, pixelwise classification is no longer suitable, even when combined with context. We therefore propose a new superpixel descriptor, called the Star descriptor, that represents each superpixel by both its own visual cues and its context. Unlike most methods in the literature, the new approach does not require any prior classification to aggregate context. Experiments carried out on urban images showed the effectiveness of the Star descriptor for generating land cover thematic maps.
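The core idea sketched in the abstract, describing a superpixel by its own visual cues together with cues aggregated from its surroundings, without any prior classification, can be illustrated as follows. This is a minimal sketch under assumptions of our own: the function name, the dictionary-based feature layout, and the mean aggregation over adjacent superpixels are illustrative choices, not the authors' exact Star formulation.

```python
def contextual_descriptor(features, adjacency, sp):
    """Describe superpixel `sp` by concatenating its own feature vector
    with the mean of its neighbors' feature vectors (an illustrative
    'star' of center plus surrounding superpixels)."""
    own = features[sp]
    neighbors = adjacency[sp]
    if neighbors:
        dim = len(own)
        # Component-wise mean over the adjacent superpixels' features.
        mean_ctx = [
            sum(features[n][d] for n in neighbors) / len(neighbors)
            for d in range(dim)
        ]
    else:
        # No neighbors: pad the context half with zeros.
        mean_ctx = [0.0] * len(own)
    return own + mean_ctx

# Toy example: 3 superpixels with 2-D color features and their adjacency.
features = {0: [0.9, 0.1], 1: [0.5, 0.5], 2: [0.1, 0.9]}
adjacency = {0: [1], 1: [0, 2], 2: [1]}
desc = contextual_descriptor(features, adjacency, 1)
# desc == [0.5, 0.5, 0.5, 0.5]: own features followed by the mean of
# neighbors 0 and 2.
```

Because the context half is computed directly from neighboring descriptors rather than from predicted labels, no prior classification pass is needed, which matches the property the abstract highlights.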


Keywords: Remote sensing · Thematic maps · Land cover · Contextual descriptor



This work was partially financed by CNPq, CAPES, and Fapemig. The authors would like to thank Telops Inc. (Québec, Canada) for acquiring and providing the data used in this study, the IEEE GRSS Image Analysis and Data Fusion Technical Committee and Dr. Michal Shimoni (Signal and Image Centre, Royal Military Academy, Belgium) for organizing the 2014 Data Fusion Contest, the Centre de Recherche Public Gabriel Lippmann (CRPGL, Luxembourg) and Dr. Martin Schlerf (CRPGL) for their contribution of the Hyper-Cam LWIR sensor, and Dr. Michaela De Martino (University of Genoa, Italy) for her contribution to data preparation.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Tiago M. H. C. Santana (1)
  • Alexei M. C. Machado (2)
  • Arnaldo de A. Araújo (1)
  • Jefersson A. dos Santos (1)

  1. Department of Computer Science, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, Brazil
  2. Department of Computer Science, Pontifícia Universidade Católica de Minas Gerais, Belo Horizonte, Brazil
