MIForests: Multiple-Instance Learning with Randomized Trees

  • Christian Leistner
  • Amir Saffari
  • Horst Bischof
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6316)


Multiple-instance learning (MIL) allows training classifiers from ambiguously labeled data. In computer vision, this learning paradigm has recently been used in many applications such as object classification, detection, and tracking. This paper presents a novel multiple-instance learning algorithm for randomized trees called MIForests. Randomized trees are fast, inherently parallel, and multi-class, and are thus increasingly popular in computer vision. MIForests combine the advantages of these classifiers with the flexibility of multiple-instance learning. In order to leverage randomized trees for MIL, we define the hidden class labels inside target bags as random variables. These random variables are optimized by training random forests and using a fast iterative homotopy method to solve the resulting non-convex optimization problem. Additionally, most previously proposed MIL approaches operate in batch or off-line mode and thus assume access to the entire training set. This limits their applicability in scenarios where the data arrives sequentially and in dynamic environments. We show that MIForests are not limited to off-line problems and present an on-line extension of our approach. In experiments, we evaluate MIForests on standard visual MIL benchmark datasets, where we achieve state-of-the-art results while being faster than previous approaches and inherently able to solve multi-class problems. The on-line version of MIForests is evaluated on visual object tracking, where it outperforms the state-of-the-art method based on boosting.
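The alternating optimization described in the abstract can be sketched roughly as follows. This is a minimal, hedged illustration, not the authors' implementation: it assumes scikit-learn's RandomForestClassifier as the underlying forest, and the function names, the sigmoid-with-temperature label-resampling rule, and the geometric cooling schedule are illustrative assumptions standing in for the paper's deterministic-annealing homotopy.

```python
# Sketch of a MIForest-style training loop (illustrative, binary case only):
# treat instance labels inside positive bags as hidden variables, then
# alternate between fitting a random forest and stochastically re-assigning
# those labels under a decreasing temperature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_miforest(bags, bag_labels, n_iters=10, t0=2.0, cooling=0.5, seed=0):
    """Alternate forest training and annealed re-assignment of hidden labels."""
    rng = np.random.default_rng(seed)
    X = np.vstack(bags)
    bag_idx = np.concatenate([np.full(len(b), i) for i, b in enumerate(bags)])
    # Initialize every hidden instance label with its bag's label.
    y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, bag_labels)])
    forest = RandomForestClassifier(n_estimators=50, min_samples_leaf=5,
                                    random_state=seed)
    temp = t0
    for _ in range(n_iters):
        forest.fit(X, y)
        pos_col = list(forest.classes_).index(1)
        p_pos = forest.predict_proba(X)[:, pos_col]
        for i, label in enumerate(bag_labels):
            mask = bag_idx == i
            if label == 0:
                y[mask] = 0   # negative bags hold only negative instances
                continue
            # Annealed stochastic assignment: high temperature explores,
            # low temperature commits to the forest's current predictions.
            probs = 1.0 / (1.0 + np.exp(-(p_pos[mask] - 0.5) / max(temp, 1e-3)))
            y[mask] = (rng.random(mask.sum()) < probs).astype(int)
            if y[mask].sum() == 0:   # MIL constraint: >=1 positive per positive bag
                j = np.flatnonzero(mask)[np.argmax(p_pos[mask])]
                y[j] = 1
        temp *= cooling
    forest.fit(X, y)   # final fit on the annealed instance labels
    return forest

def predict_bag(forest, bag):
    # Standard MIL rule: a bag is positive if any instance is predicted positive.
    return int(forest.predict(np.asarray(bag)).max())
```

At high temperature the label assignment is nearly uniform, so the optimizer explores; as the temperature drops, labels lock onto the forest's predictions, mirroring the deterministic-annealing idea the paper uses to escape poor local minima of the non-convex objective.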


Keywords (machine-generated): Random Forest, Multiple Instance Learning, Instance Label, Deterministic Annealing, Label Noise


Supplementary material

Electronic Supplementary Material: 978-3-642-15567-3_3_MOESM1_ESM.avi (6.2 MB)


  1. Keeler, J., Rumelhart, D., Leow, W.: Integrated segmentation and recognition of hand-printed numerals. In: NIPS (1990)
  2. Dietterich, T., Lathrop, R., Lozano-Pérez, T.: Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence (1997)
  3. Andrews, S., Tsochantaridis, I., Hofmann, T.: Support vector machines for multiple-instance learning. Adv. Neural Inf. Process. Syst. 15, 561–568 (2003)
  4. Ruffo, G.: Learning single and multiple instance decision trees for computer security applications. PhD thesis (2000)
  5. Zhang, Q., Goldman, S.: EM-DD: An improved multi-instance learning technique. In: NIPS (2002)
  6. Vijayanarasimhan, S., Grauman, K.: Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization. In: CVPR (2008)
  7. Chen, Y., Bi, J., Wang, J.: MILES: Multiple-instance learning via embedded instance selection. IEEE PAMI (2006)
  8. Viola, P., Platt, J., Zhang, C.: Multiple instance boosting for object detection. In: NIPS (2006)
  9. Zhang, C., Viola, P.: Multiple-instance pruning for learning efficient cascade detectors. In: NIPS (2008)
  10. Stikic, M., Schiele, B.: Activity recognition from sparsely labeled data using multi-instance learning. In: Choudhury, T., Quigley, A., Strang, T., Suginuma, K. (eds.) LoCA 2009. LNCS, vol. 5561, pp. 156–173. Springer, Heidelberg (2009)
  11. Vezhnevets, A., Buhmann, J.: Towards weakly supervised semantic segmentation by means of multiple instance and multitask learning. In: CVPR (2010)
  12. Babenko, B., Yang, M.H., Belongie, S.: Visual tracking with online multiple instance learning. In: CVPR (2009)
  13. Breiman, L.: Random forests. Machine Learning (2001)
  14. Moosmann, F., Triggs, B., Jurie, F.: Fast discriminative visual codebooks using randomized clustering forests. In: NIPS, pp. 985–992 (2006)
  15. Caruana, R., Karampatziakis, N., Yessenalina, A.: An empirical evaluation of supervised learning in high dimensions. In: ICML (2008)
  16. Sharp, T.: Implementing decision trees and forests on a GPU. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part IV. LNCS, vol. 5305, pp. 595–608. Springer, Heidelberg (2008)
  17. Gall, J., Lempitsky, V.: Class-specific Hough forests for object detection. In: CVPR (2009)
  18. Shotton, J., Johnson, M., Cipolla, R.: Semantic texton forests for image categorization and segmentation. In: CVPR (2008)
  19. Bosch, A., Zisserman, A., Munoz, X.: Image classification using random forests and ferns. In: ICCV (2007)
  20. Lepetit, V., Fua, P.: Keypoint recognition using randomized trees. In: CVPR (2006)
  21. Saffari, A., Leistner, C., Godec, M., Santner, J., Bischof, H.: On-line random forests. In: OLCV (2009)
  22. Blum, A., Kalai, A.: A note on learning from multiple-instance examples. Machine Learning, pp. 23–29 (1998)
  23. Mangasarian, O., Wild, E.: Multiple-instance learning via successive linear programming. Technical report (2005)
  24. Wang, J., Zucker, J.D.: Solving the multiple-instance problem: A lazy learning approach. In: ICML (2000)
  25. Maron, O., Lozano-Pérez, T.: A framework for multiple-instance learning. In: NIPS (1997)
  26. Zhang, Q., Goldman, S.: EM-DD: An improved multiple instance learning technique. In: NIPS (2001)
  27. Foulds, J., Frank, E.: Revisiting multi-instance learning via embedded instance selection. LNCS. Springer, Heidelberg (2008)
  28. Blockeel, H., Page, D., Srinivasan, A.: Multi-instance tree learning. In: ICML (2005)
  29. Amit, Y., Geman, D.: Shape quantization and recognition with randomized trees. Neural Computation (1996)
  30. Rose, K.: Deterministic annealing, constrained clustering, and optimization. In: IJCNN (1998)
  31. Gehler, P., Chapelle, O.: Deterministic annealing for multiple-instance learning. In: AISTATS (2007)
  32. Leistner, C., Saffari, A., Santner, J., Bischof, H.: Semi-supervised random forests. In: ICCV (2009)
  33. Oza, N., Russell, S.: Online bagging and boosting. In: Proceedings Artificial Intelligence and Statistics, pp. 105–112 (2001)
  34. Pakkanen, J., Iivarinen, J., Oja, E.: The evolving tree - a novel self-organizing network for data analysis. Neural Process. Lett. 20, 199–211 (2004)
  35. Bunescu, R., Mooney, R.: Multiple instance learning for sparse positive bags. In: ICML (2007)
  36. Zhou, Z.H., Sun, Y.Y., Li, Y.F.: Multi-instance learning by treating instances as non-i.i.d. samples. In: ICML (2009)
  37. Avidan, S.: Ensemble tracking. In: CVPR, vol. 2, pp. 494–501 (2005)
  38. Grabner, H., Bischof, H.: On-line boosting and vision. In: CVPR (2006)
  39. Ross, D., Lim, J., Lin, R.S., Yang, M.H.: Incremental learning for robust visual tracking. IJCV (2008)
  40. Adam, A., Rivlin, E., Shimshoni, I.: Robust fragments-based tracking using the integral histogram. In: CVPR (2006)
  41. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL visual object classes challenge 2007. In: VOC (2007)
  42. Grabner, H., Leistner, C., Bischof, H.: Semi-supervised on-line boosting for robust tracking. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302, pp. 234–247. Springer, Heidelberg (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Christian Leistner (1)
  • Amir Saffari (1)
  • Horst Bischof (1)
  1. Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria
