Unsupervised Scene Classification Based on Context of Features for a Mobile Robot

  • Hirokazu Madokoro
  • Yuya Utsumi
  • Kazuhito Sato
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6881)

Abstract

This paper presents an unsupervised scene classification method based on the context of features for semantic recognition of indoor scenes by an autonomous mobile robot. Our method creates Visual Words (VWs) of two types using the Scale-Invariant Feature Transform (SIFT) and Gist. Combining the two sets of VWs, our method creates Bags of VWs (BoVWs) that are voted into a two-dimensional histogram as context-based features. Moreover, our method generates labels as candidate categories while maintaining both stability and plasticity through the incremental learning function of Adaptive Resonance Theory-2 (ART-2). Our method achieves unsupervised scene classification by using the labels generated by ART-2 as teaching signals for Counter Propagation Networks (CPNs). The spatial and topological relations among scenes are mapped onto the category map of the CPNs, where the relations among the classified scenes and their categories are visualized. Experiments demonstrate the classification accuracy for semantic categories such as office rooms and corridors using an open dataset that serves as an evaluation platform for position estimation and navigation of an autonomous mobile robot.
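The following is a minimal sketch of the local-feature half of this pipeline: SIFT descriptors are quantized into a visual-word codebook and voted into a BoVW histogram. It assumes OpenCV and scikit-learn are available; the paper's two-dimensional SIFT-by-Gist histogram is simplified here to a one-dimensional SIFT-only histogram, and the Gist, ART-2, and CPN stages are only indicated in comments because they are not standard library components. All function and file names are illustrative, not the authors' implementation.

```python
# Hedged sketch of a SIFT Bag-of-Visual-Words front end (not the authors' code).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def sift_descriptors(image_paths):
    """Extract SIFT descriptors (local features) from each image."""
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        per_image.append(desc if desc is not None else np.empty((0, 128)))
    return per_image

def build_codebook(per_image_desc, n_words=64):
    """Quantize all descriptors into a visual-word codebook with k-means."""
    all_desc = np.vstack([d for d in per_image_desc if len(d)])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(desc, codebook):
    """Vote each local feature into a normalized BoVW histogram."""
    if len(desc) == 0:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Usage (hypothetical frame list from a robot image sequence):
# frames = ["frame0001.png", "frame0002.png"]
# descs = sift_descriptors(frames)
# codebook = build_codebook(descs)
# features = np.array([bovw_histogram(d, codebook) for d in descs])
# In the paper, analogous Gist-based VWs would extend this to a 2-D histogram,
# which then feeds ART-2 label generation and CPN category mapping.
```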

Keywords

Mobile Robot · Autonomous Mobile Robot · Teaching Signal · Indoor Scene · Kohonen Layer


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Hirokazu Madokoro
  • Yuya Utsumi
  • Kazuhito Sato
  1. Department of Machine Intelligence and Systems Engineering, Akita Prefectural University, Yurihonjo City, Japan