Learning by Integrating Information Within and Across Fixations

  • Predrag Neskovic
  • Liang Wu
  • Leon N Cooper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4132)

Abstract

In this work we introduce a Bayesian Integrate And Shift (BIAS) model for learning object categories. The model is biologically inspired and uses Bayesian inference to integrate information within and across fixations. In our model, an object is represented as a collection of features arranged at specific locations with respect to the location of the fixation point. Although the number of feature detectors we use is large, learning does not require a large amount of training data: by introducing an intermediate representation, object views, between an object and its features, we reduce the dependence among the feature detectors. We tested the system on four object categories and demonstrated that it can learn a new category from only a few training examples.
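The core idea of the abstract can be illustrated with a minimal sketch: treat feature detectors as conditionally independent given an intermediate object view, and accumulate log-evidence for each category over successive fixations. Everything below (the shapes, the Bernoulli likelihoods, the uniform priors) is an illustrative assumption, not the authors' actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

n_categories = 4    # the paper tests four object categories
n_views = 3         # assumed size of the intermediate "object view" layer
n_features = 8      # assumed number of feature detectors per fixation

# Assumed learned parameters: P(feature i fires | view v of category c).
likelihood = rng.uniform(0.05, 0.95, size=(n_categories, n_views, n_features))

def fixation_log_likelihood(responses):
    """log P(responses | category), summing over views (Bernoulli naive Bayes)."""
    # Per-view log-likelihood of the binary feature responses at this fixation.
    log_p = (responses * np.log(likelihood)
             + (1 - responses) * np.log(1.0 - likelihood)).sum(axis=2)
    # Uniform prior over views within each category.
    return np.logaddexp.reduce(log_p, axis=1) - np.log(n_views)

def integrate(fixations):
    """Accumulate evidence across fixations into a posterior over categories."""
    log_post = np.full(n_categories, -np.log(n_categories))  # uniform prior
    for responses in fixations:
        log_post += fixation_log_likelihood(responses)
    log_post -= np.logaddexp.reduce(log_post)  # normalize
    return np.exp(log_post)

# Simulate three fixations, each producing binary feature-detector responses.
fixations = rng.integers(0, 2, size=(3, n_features))
posterior = integrate(fixations)
print(posterior)
```

The view layer is what keeps the model data-efficient in this sketch: features are only assumed independent given a view, so the number of parameters grows with views times features rather than with all feature combinations.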

Keywords

Object Recognition · Feature Detector · Object Category · Integrate Information · Intermediate Representation

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Predrag Neskovic (1)
  • Liang Wu (1)
  • Leon N Cooper (1)
  1. Institute for Brain and Neural Systems and Department of Physics, Brown University, Providence, USA