Visual Classification of Images by Learning Geometric Appearances Through Boosting

  • Martin Antenreiter
  • Christian Savu-Krohn
  • Peter Auer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4087)

Abstract

We present a multiclass classification system for gray value images based on boosting. Feature selection is performed with the LPBoost algorithm, which chooses suitable features of the appropriate type; in our experiments we use up to nine different feature types simultaneously. Furthermore, a greedy search strategy within the weak learner finds simple geometric relations between features selected in previous boosting rounds. The final hypothesis may also consist of more than one geometric model per object class. Finally, we provide a weight optimization method for combining the learned one-vs-one classifiers into a multiclass classifier. We tested our approach on a publicly available data set and compared our results to other state-of-the-art approaches, such as the "bag of keypoints" method.
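
For illustration, the sketch below shows LPBoost-style column generation for a single binary (one-vs-one) classifier, the core selection mechanism referred to above: the restricted dual LP yields sample weights, the weak learner returns the hypothesis with the largest weighted edge, and the weights of the selected hypotheses are recovered from the LP duals. This is only a minimal sketch, not the authors' implementation: decision stumps on a toy feature matrix stand in for the paper's image feature types and geometric relations, and all names and parameters (e.g. the soft-margin cap D) are assumptions.

```python
# Minimal sketch (assumption-laden, not the authors' code) of LPBoost-style
# column generation for one binary classifier, with decision stumps as
# stand-in weak hypotheses and scipy's LP solver for the restricted dual.
import numpy as np
from scipy.optimize import linprog


def stump_predictions(X, feature, threshold, sign):
    """Weak hypothesis h(x) in {-1, +1}: sign if x[feature] > threshold, else -sign."""
    return sign * np.where(X[:, feature] > threshold, 1.0, -1.0)


def best_stump(X, y, u):
    """Weak learner: return the stump maximising the weighted edge sum_i u_i y_i h(x_i)."""
    best_edge, best_params = -np.inf, None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (+1.0, -1.0):
                edge = np.dot(u * y, stump_predictions(X, f, thr, sign))
                if edge > best_edge:
                    best_edge, best_params = edge, (f, thr, sign)
    return best_edge, best_params


def lpboost(X, y, D=0.2, eps=1e-4, max_iter=50):
    """Soft-margin LPBoost via column generation (D is the assumed per-sample weight cap)."""
    n = len(y)
    u = np.full(n, 1.0 / n)   # dual sample weights, start uniform
    beta = 0.0
    H, stumps = [], []        # H holds the columns y_i * h_j(x_i)
    for _ in range(max_iter):
        edge, stump = best_stump(X, y, u)
        if edge <= beta + eps:     # no remaining column improves the dual
            break
        stumps.append(stump)
        H.append(y * stump_predictions(X, *stump))
        # Restricted dual:  min beta
        #   s.t.  sum_i u_i y_i h_j(x_i) <= beta  for all selected j,
        #         sum_i u_i = 1,  0 <= u_i <= D.   Variables: (u_1..u_n, beta).
        c = np.r_[np.zeros(n), 1.0]
        A_ub = np.c_[np.array(H), -np.ones(len(H))]
        b_ub = np.zeros(len(H))
        A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, D)] * n + [(None, None)], method="highs")
        u, beta = res.x[:n], res.x[n]
    if not stumps:
        raise ValueError("weak learner found no hypothesis with positive edge")
    # Hypothesis weights are the duals of the inequality constraints.
    a = -res.ineqlin.marginals
    return stumps, a / a.sum()


def predict(X, stumps, a):
    scores = sum(w * stump_predictions(X, *s) for s, w in zip(stumps, a))
    return np.sign(scores)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 5))                     # toy "feature responses"
    y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1.0, -1.0)
    stumps, a = lpboost(X, y)
    print("training accuracy:", np.mean(predict(X, stumps, a) == y))
```

In the full system described in the abstract, the weak learner would additionally perform the greedy search over geometric relations between previously selected features, and the resulting one-vs-one classifiers would be combined with optimized weights for the multiclass decision.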

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Martin Antenreiter (1)
  • Christian Savu-Krohn (1)
  • Peter Auer (1)

  1. Chair of Information Technology (CiT), University of Leoben, Austria
