
International Journal of Computer Vision, Volume 100, Issue 2, pp 203–215

Making a Shallow Network Deep: Conversion of a Boosting Classifier into a Decision Tree by Boolean Optimisation

  • Tae-Kyun Kim
  • Ignas Budvytis
  • Roberto Cipolla

Abstract

This paper presents a novel way to speed up the evaluation of a boosting classifier. We make a shallow (flat) network deep (hierarchical) by growing a tree from the decision regions of a given boosting classifier. The tree provides many short paths for speed-up while preserving the reasonably smooth decision regions of the boosting classifier for good generalisation. To convert a boosting classifier into a decision tree, we formulate a Boolean optimisation problem, which has previously been studied for circuit design but was limited to a small number of binary variables. In this work, a novel optimisation method is proposed, first for several tens of variables, i.e. the weak-learners of a boosting classifier, and then for any larger number of weak-learners by using a two-stage cascade. Experiments on synthetic and face-image data sets show that the obtained tree achieves a significant speed-up, at the same accuracy, over both a standard boosting classifier and Fast-exit, a previously described method for speeding up boosting classification. As a general meta-algorithm, the proposed method is also useful for a boosting cascade, where it speeds up the individual stage classifiers by different gains. The method is further demonstrated on fast-moving object tracking and segmentation problems.
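The core observation behind the conversion can be sketched in a few lines. This is a hypothetical, simplified stand-in, not the paper's Boolean-optimisation procedure: a boosting classifier H(x) = sign(Σᵢ αᵢ hᵢ(x)) depends on x only through the bit vector of weak-learner outputs (h₁(x), …, hₘ(x)), so a decision tree over those bits can reproduce H exactly while often deciding after evaluating only a few weak-learners. The greedy split rule below (test the largest-weight learner first) is an illustrative choice, not the optimisation used in the paper.

```python
def boosting_sign(code, alphas):
    """Decision of the boosting classifier for one weak-learner bit vector:
    sign(sum_i alpha_i * (+1 if bit else -1))."""
    s = sum(a if bit else -a for a, bit in zip(alphas, code))
    return 1 if s >= 0 else -1

def grow_tree(codes, alphas, free=None):
    """Greedy conversion sketch: test weak-learners in order of weight,
    stopping as soon as every bit vector reaching this node already shares
    the same boosting decision (this creates the short paths)."""
    if free is None:
        free = list(range(len(alphas)))
    labels = {boosting_sign(c, alphas) for c in codes}
    if len(labels) == 1 or not free:
        return labels.pop()                      # leaf: decision is fixed
    i = max(free, key=lambda j: alphas[j])       # most influential bit first
    rest = [j for j in free if j != i]
    left = [c for c in codes if not c[i]]
    right = [c for c in codes if c[i]]
    fallback = labels.pop()                      # tie-break for an empty branch
    return (i,
            grow_tree(left, alphas, rest) if left else fallback,
            grow_tree(right, alphas, rest) if right else fallback)

def tree_decide(tree, code):
    """Walk the tree, evaluating only the weak-learners on the path."""
    while isinstance(tree, tuple):
        i, lo, hi = tree
        tree = hi if code[i] else lo
    return tree
```

For example, with weights [1.0, 0.8, 0.3, 0.2], any input whose two heaviest weak-learners both fire is classified positive after only two tests instead of four, while the tree agrees with the boosting classifier on every bit vector.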

Keywords

Boosting · Decision tree · Decision regions · Boolean optimisation · Boosting cascade · Face detection · Tracking · Segmentation



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London, UK
  2. Department of Engineering, University of Cambridge, Cambridge, UK
