A Two-Level Learning Method for Generalized Multi-instance Problems

  • Nils Weidmann
  • Eibe Frank
  • Bernhard Pfahringer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2837)

Abstract

In traditional multi-instance (MI) learning, a single positive instance in a bag produces a positive class label. Hence, the learner knows how the bag’s class label depends on the labels of the instances in the bag and can explicitly use this information to solve the learning task. In this paper we investigate a generalized view of the MI problem where this simple assumption no longer holds. We assume that an “interaction” between instances in a bag determines the class label. Our two-level learning method for this type of problem transforms an MI bag into a single meta-instance that can be learned by a standard propositional method. The meta-instance indicates which regions in the instance space are covered by instances of the bag. Results on both artificial and real-world data show that this two-level classification approach is well suited for generalized MI problems.
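The core transformation described above — turning a bag of instances into a single meta-instance that records which regions of instance space the bag covers — can be sketched as follows. This is a minimal illustration, not the paper's implementation: here regions are a simple uniform grid over the instance space, whereas the paper constructs regions with a first-level learner. All function names are hypothetical.

```python
# Hedged sketch of the two-level idea: map each multi-instance bag to one
# "meta-instance" of per-region counts, which a standard propositional
# learner could then classify. The uniform grid is an illustrative
# stand-in for the paper's learned regions.

def grid_region(instance, bounds, bins):
    """Map a numeric instance to the index of the grid cell it falls in."""
    idx = 0
    for x, (lo, hi) in zip(instance, bounds):
        # Clamp to the last bin so values at the upper bound stay in range.
        b = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        idx = idx * bins + b
    return idx

def bag_to_meta_instance(bag, bounds, bins):
    """Count how many of the bag's instances land in each region."""
    n_regions = bins ** len(bounds)
    counts = [0] * n_regions
    for inst in bag:
        counts[grid_region(inst, bounds, bins)] += 1
    return counts

# Two bags in a 1-D instance space [0, 1), split into 4 regions.
bounds = [(0.0, 1.0)]
bag_a = [[0.1], [0.15], [0.9]]   # covers regions 0 and 3
bag_b = [[0.5], [0.55]]          # covers region 2 only
print(bag_to_meta_instance(bag_a, bounds, bins=4))  # [2, 0, 0, 1]
print(bag_to_meta_instance(bag_b, bounds, bins=4))  # [0, 0, 2, 0]
```

Because the meta-instance summarizes *which* regions are occupied (and how densely), a propositional learner trained on these vectors can capture interactions between instances in a bag, rather than relying on the classical single-positive-instance assumption.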

Keywords

Class Label · Irrelevant Attribute · Positive Instance · Attribute Selection · Random Instance
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Nils Weidmann (1, 2)
  • Eibe Frank (2)
  • Bernhard Pfahringer (2)
  1. Department of Computer Science, University of Freiburg, Freiburg, Germany
  2. Department of Computer Science, University of Waikato, Hamilton, New Zealand