Symmetry Enhanced Adaboost

  • Florian Baumann
  • Katharina Ernst
  • Arne Ehlers
  • Bodo Rosenhahn
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6453)

Abstract

This paper describes a method to reduce the immense training time of the conventional Adaboost learning algorithm in object detection by shrinking the sampling area. We present a new algorithm that exploits the geometric, and in particular the symmetric, relations of the analyzed object. Symmetry Enhanced Adaboost (SEAdaboost) can limit the scanning area considerably, depending on the degree of the object's symmetry, while maintaining the detection rate. SEAdaboost takes advantage of the symmetric characteristics of an object by concentrating on corresponding symmetry features during the selection of weak classifiers. In our experiments we obtain a 39% reduction in training time on average, with slightly increased detection rates (up to 2.4% or up to 6%, depending on the object class) compared to the conventional Adaboost algorithm.
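
The full paper is not reproduced here, but the mechanism sketched in the abstract can be illustrated. Below is a minimal, hypothetical Python sketch, assuming axis-aligned Haar-like rectangle features and a vertical symmetry axis through the detection window; the names (`mirror_feature`, `select_symmetric_weak_classifier`, `feature_response`) are illustrative and not taken from the paper. It shows how restricting candidate features to one half of the window and pairing each candidate with its mirrored counterpart roughly halves the sampling area a boosting round must scan.

```python
import numpy as np

# Hypothetical Haar-like feature: an (x, y, w, h) rectangle in the detection window.

def mirror_feature(feature, window_width):
    """Reflect a rectangle feature across the window's vertical symmetry axis."""
    x, y, w, h = feature
    return (window_width - x - w, y, w, h)

def select_symmetric_weak_classifier(candidates, feature_response,
                                     weights, labels, window_width):
    """
    One boosting round in the SEAdaboost spirit: only candidates lying in the
    left half of the window are scanned, and each is scored jointly with its
    mirrored counterpart, shrinking the feature sampling area.

    feature_response(f) must return the response vector of feature f over all
    training samples; labels are in {-1, +1} and weights sum to 1.
    """
    best_err, best_pair = np.inf, None
    for f in candidates:
        x, y, w, h = f
        if x + w > window_width // 2:              # restrict sampling to one half
            continue
        g = mirror_feature(f, window_width)
        paired = feature_response(f) + feature_response(g)  # symmetric pair response
        preds = np.where(paired >= 0, 1, -1)       # decision stump at threshold 0 (toy choice)
        err = weights[preds != labels].sum()       # weighted classification error
        if err < best_err:
            best_err, best_pair = err, (f, g)
    return best_pair, best_err

# Toy usage with random stand-in responses (4 samples, 24x24 window):
rng = np.random.default_rng(0)
labels = np.array([1, -1, 1, -1])
weights = np.full(4, 0.25)
feats = [(2, 3, 6, 6), (4, 0, 4, 8)]
resp = lambda f: rng.normal(size=4)               # stand-in for real Haar responses
print(select_symmetric_weak_classifier(feats, resp, weights, labels, 24))
```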

Keywords

Detection Rate · Solder Joint · Object Detection · Training Time · Object Class

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Florian Baumann¹
  • Katharina Ernst¹
  • Arne Ehlers¹
  • Bodo Rosenhahn¹

  1. Institut für Informationsverarbeitung, Leibniz Universität Hannover, Hannover, Germany
