ICONIP 2007: Neural Information Processing, pp. 940–949

Incremental Knowledge Representation Based on Visual Selective Attention

  • Minho Lee
  • Sang-Woo Ban
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4985)

Abstract

Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which the learning techniques of neural networks are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data, represent knowledge regarding objects, and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual "what" pathway in the brain. A bottom-up attention model selectively decides salient object areas. Color and form features for a selected object are generated by a sparse coding mechanism implemented with a convolutional neural network (CNN). Using the features extracted by the CNN as inputs, the incremental knowledge representation model, called the growing fuzzy topology adaptive resonance theory (TART) network, forms clusters for the construction of an ontology map in the color and form domains. The clustered information describes specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data demonstrate the validity of this approach.
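The incremental clustering stage builds on the Fuzzy ART family of models (the growing fuzzy TART network extends this idea with topology adaptation). As a minimal illustrative sketch, not the authors' implementation, a plain Fuzzy ART clusterer commits a new category whenever no existing category passes the vigilance test, which is what makes the learning incremental:

```python
# Minimal Fuzzy ART incremental clusterer -- a simplified sketch of the
# category learning underlying ART-style networks (Carpenter et al., 1992).
# Parameter names (rho, alpha, beta) follow the standard Fuzzy ART
# formulation; the values and example inputs below are illustrative.

def complement_code(x):
    """Complement coding: map x in [0,1]^d to [x, 1-x] in [0,1]^2d."""
    return x + [1.0 - v for v in x]

def fuzzy_min(a, b):
    """Elementwise fuzzy AND (minimum)."""
    return [min(u, v) for u, v in zip(a, b)]

def norm1(a):
    """L1 norm for vectors in [0,1]^d."""
    return sum(a)

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho      # vigilance: higher -> more, tighter clusters
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.weights = []   # one weight vector per committed category

    def present(self, x):
        """Present one input in [0,1]^d; return the winning category index,
        committing a new category if none passes the vigilance test."""
        i = complement_code(x)
        # Rank categories by choice function T_j = |i ^ w_j| / (alpha + |w_j|)
        order = sorted(range(len(self.weights)),
                       key=lambda j: -norm1(fuzzy_min(i, self.weights[j]))
                                     / (self.alpha + norm1(self.weights[j])))
        for j in order:
            match = norm1(fuzzy_min(i, self.weights[j])) / norm1(i)
            if match >= self.rho:  # resonance: update the winner's weights
                w = self.weights[j]
                self.weights[j] = [self.beta * m + (1 - self.beta) * wv
                                   for m, wv in zip(fuzzy_min(i, w), w)]
                return j
        self.weights.append(i)  # no resonance: commit a new category
        return len(self.weights) - 1

net = FuzzyART(rho=0.8)
c1 = net.present([0.9, 0.1])    # first input founds category 0
c2 = net.present([0.85, 0.15])  # similar input resonates with category 0
c3 = net.present([0.1, 0.9])    # dissimilar input founds category 1
```

Raising the vigilance `rho` toward 1.0 forces tighter clusters (more categories); the growing fuzzy TART network additionally adapts the topology among committed categories, which this sketch omits.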

Keywords

Incremental Knowledge Representation · Visual Selective Attention · Stereo Saliency Map · Incremental Object Perception



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Minho Lee¹
  • Sang-Woo Ban²
  1. School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu, Korea
  2. Dept. of Information and Communication Engineering, Dongguk University, Gyeongbuk, Korea
