A Boosting Approach to Multiple Instance Learning

  • Peter Auer
  • Ronald Ortner
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3201)


In this paper we present a boosting approach to multiple instance learning. As weak hypotheses we use balls (with respect to various metrics) centered at instances of positive bags. For the ∞-norm, these hypotheses can be modified into hyper-rectangles by a greedy algorithm. Our approach includes a stopping criterion for the algorithm based on estimates of the generalization error. These estimates can also be used to choose a preferable metric and data normalization. Compared to other approaches, our algorithm delivers improved or at least competitive results on several multiple instance benchmark data sets.
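The core idea sketched in the abstract can be illustrated in a few lines of code. The following is a hypothetical toy implementation, not the authors' actual algorithm: weak hypotheses are balls centered at instances of positive bags, a bag is classified positive if any of its instances falls inside the ball, and AdaBoost-style weighting combines the balls. All function names and the candidate-radius heuristic (radii taken from bag-to-center distances) are illustrative assumptions.

```python
import numpy as np

def ball_predict(bag, center, radius, p=2):
    # A bag (n_instances x n_features array) is positive iff at least one
    # of its instances lies inside the ball; p=np.inf gives the ∞-norm.
    return 1 if np.min(np.linalg.norm(bag - center, ord=p, axis=1)) <= radius else -1

def boost_mil(bags, labels, T=10, p=2):
    # AdaBoost over ball-shaped weak hypotheses; labels is a ±1 numpy array.
    m = len(bags)
    w = np.full(m, 1.0 / m)
    # Candidate centers: all instances of positive bags, as in the paper.
    pos_instances = np.vstack([b for b, y in zip(bags, labels) if y == 1])
    ensemble = []
    for _ in range(T):
        best = None
        for c in pos_instances:
            # Distance from each bag to the center = its nearest instance's distance.
            dists = np.array([np.min(np.linalg.norm(b - c, ord=p, axis=1))
                              for b in bags])
            # Candidate radii: only the bag distances themselves matter.
            for r in dists:
                preds = np.where(dists <= r, 1, -1)
                err = w[preds != labels].sum()
                if best is None or err < best[0]:
                    best = (err, c, r)
        err, c, r = best
        if err >= 0.5:          # no weak hypothesis better than chance
            break
        err = max(err, 1e-10)   # avoid division by zero for a perfect ball
        alpha = 0.5 * np.log((1 - err) / err)
        preds = np.array([ball_predict(b, c, r, p) for b in bags])
        w *= np.exp(-alpha * labels * preds)
        w /= w.sum()
        ensemble.append((alpha, c, r))
    return ensemble

def predict(ensemble, bag, p=2):
    # Weighted vote of all ball hypotheses.
    s = sum(a * ball_predict(bag, c, r, p) for a, c, r in ensemble)
    return 1 if s >= 0 else -1
```

Note that the paper's stopping criterion is based on generalization-error estimates rather than the simple error-≥-0.5 check above, and the greedy hyper-rectangle refinement for the ∞-norm is omitted from this sketch.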


Keywords: Neural Information Processing System · Generalization Error · Multiple Instance · Weak Hypothesis · Multiple Instance Learning



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Peter Auer (1)
  • Ronald Ortner (1)
  1. University of Leoben, Leoben, Austria
