Improved Boosting Performance by Explicit Handling of Ambiguous Positive Examples

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 318)

Abstract

Visual classes naturally contain ambiguous examples, which differ depending on the feature and classifier used and are hard to disambiguate from surrounding negatives without overfitting. Boosting in particular tends to overfit to such hard and ambiguous examples because of its flexibility and typically aggressive loss functions. We propose a two-pass learning method that identifies ambiguous examples and relearns, either subjecting them to an exclusion function or using them in a later stage of an inverted cascade. We provide an experimental comparison of different boosting algorithms on the VOC2007 dataset, training them with and without our proposed extension. Using our exclusion extension improves the performance of almost all of the tested boosting algorithms without adding any test-time cost. Our proposed inverted cascade adds some test-time cost but gives additional improvements in performance. Our results also suggest that outlier exclusion is complementary to positive jittering and hard negative mining.
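
To make the two-pass exclusion idea concrete, the following is a minimal sketch, assuming scikit-learn's AdaBoostClassifier as the boosting algorithm and a simple margin threshold for flagging ambiguous positives; the function name and threshold are illustrative assumptions, not the authors' implementation.

    # Sketch of the two-pass exclusion scheme described in the abstract.
    # Assumptions (not from the paper): AdaBoost as the booster and a
    # margin threshold to decide which positives count as ambiguous.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def train_with_exclusion(X, y, margin_threshold=0.0):
        """First pass: train on all data. Second pass: exclude positives the
        first classifier is least confident about, then retrain."""
        first = AdaBoostClassifier(n_estimators=200).fit(X, y)

        # Signed confidence for the positive class; low scores on positive
        # examples mark them as ambiguous.
        scores = first.decision_function(X)
        ambiguous = (y == 1) & (scores < margin_threshold)

        keep = ~ambiguous
        second = AdaBoostClassifier(n_estimators=200).fit(X[keep], y[keep])
        return second, ambiguous

In this sketch the excluded examples are simply dropped; in the inverted-cascade variant described above they would instead be handled by a later stage at some extra test-time cost.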

Keywords

Boosting · Image classification · Algorithm evaluation · Dataset pruning · VOC2007

Notes

Acknowledgments

This work has been funded by the Swedish Foundation for Strategic Research (SSF) within the project VINST.

References

  1. Felzenszwalb, P., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part-based models. PAMI (2010)
  2. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR (2005)
  3. Laptev, I.: Improving object detection with boosted histograms. IVC (2009)
  4. Kumar, M.P., Zisserman, A., Torr, P.H.S.: Efficient discriminative learning of parts-based models. In: ICCV (2009)
  5. Long, P.M., Servedio, R.A.: Random classification noise defeats all convex potential boosters. In: ICML (2008)
  6. Masnadi-Shirazi, H., Vasconcelos, N.: On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost. In: NIPS (2008)
  7. Leistner, C., Saffari, A., Roth, P.M., Bischof, H.: On robustness of on-line boosting - a competitive study. In: ICCV Workshops (2009)
  8. Bauer, E., Kohavi, R.: An empirical comparison of voting classification algorithms: bagging, boosting, and variants. MLJ (1999)
  9. Dietterich, T.: An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. MLJ (2000)
  10. Freund, Y.: A more robust boosting algorithm. arXiv:0905.2138 (2009)
  11. Freund, Y.: An adaptive version of the boost by majority algorithm. In: COLT (1999)
  12. Masnadi-Shirazi, H., Mahadevan, V., Vasconcelos, N.: On the design of robust classifiers for computer vision. In: CVPR (2010)
  13. Grove, A., Schuurmans, D.: Boosting in the limit: maximizing the margin of learned ensembles. In: AAAI (1998)
  14. Warmuth, M., Glocer, K., Rätsch, G.: Boosting algorithms for maximizing the soft margin. In: NIPS (2008)
  15. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. AOS (2000)
  16. Vijayanarasimhan, S.: Large-scale live active learning: training object detectors with crawled data and crowds. In: CVPR (2011)
  17. Zhu, X., Vondrick, C., Ramanan, D., Fowlkes, C.C.: Do we need more training data or better models for object detection? In: BMVC (2012)
  18. Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., Kipman, A., Blake, A.: Real-time human pose recognition in parts from single depth images. In: CVPR (2011)
  19. Torralba, A., Fergus, R., Freeman, W.T.: 80 million tiny images: a large data set for nonparametric object and scene recognition. PAMI (2008)
  20. Hays, J., Efros, A.: Scene completion using millions of photographs. TOG (2007)
  21. Freund, Y.: Boosting a weak learning algorithm by majority. IANDC (1995)
  22. Rätsch, G., Onoda, T., Müller, K.: Soft margins for AdaBoost. MLJ (2001)
  23. Vezhnevets, A., Barinova, O.: Avoiding boosting overfitting by removing confusing samples. In: ECML (2007)
  24. Angelova, A., Abu-Mostafa, Y., Perona, P.: Pruning training sets for learning of object categories. In: CVPR (2005)
  25. Viola, P., Platt, J.: Multiple instance boosting for object detection. In: NIPS (2006)
  26. Kumar, M.P., Packer, B.: Self-paced learning for latent variable models. In: NIPS (2010)
  27. Friedman, J.: Greedy function approximation: a gradient boosting machine. AOS (2001)
  28. Mason, L., Baxter, J., Bartlett, P., Frean, M.: Boosting algorithms as gradient descent in function space. In: NIPS (1999)
  29. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. In: COLT (1995)
  30. Fan, R., Chang, K., Hsieh, C., Wang, X., Lin, C.: LIBLINEAR: a library for large linear classification. JMLR (2008)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Computer Vision and Active Perception, KTH, Stockholm, Sweden