
Revisiting Multiple-Instance Learning Via Embedded Instance Selection

  • James Foulds
  • Eibe Frank
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5360)

Abstract

Multiple-Instance Learning via Embedded Instance Selection (MILES) is a recently proposed multiple-instance (MI) classification algorithm that applies a single-instance base learner to a propositionalized version of MI data. However, the original authors consider only one single-instance base learner for the algorithm — the 1-norm SVM. We present an empirical study investigating the efficacy of alternative base learners for MILES, and compare MILES to other MI algorithms. Our results show that boosted decision stumps can in some cases provide better classification accuracy than the 1-norm SVM as a base learner for MILES. Although MILES provides competitive performance when compared to other MI learners, we identify simpler propositionalization methods that require shorter training times while retaining MILES’ strong classification performance on the datasets we tested.
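The propositionalization step at the heart of MILES maps each bag to a fixed-length feature vector: feature k is the bag's maximum Gaussian similarity to the k-th candidate "concept" instance (Chen et al. take the instances of all training bags as concepts). A minimal sketch of that embedding is below; the function name, `sigma` default, and example data are illustrative, not part of the original formulation.

```python
import numpy as np

def miles_embedding(bags, concepts, sigma=1.0):
    """Map each bag (a 2-D array, one row per instance) to a single
    feature vector. Feature k is the bag's maximum similarity
    exp(-||x - c_k||^2 / sigma^2) to concept instance c_k, following
    the MILES similarity measure of Chen et al."""
    features = np.empty((len(bags), len(concepts)))
    for i, bag in enumerate(bags):
        # Squared distances between every instance in the bag (rows)
        # and every concept (columns), via broadcasting.
        d2 = ((bag[:, None, :] - concepts[None, :, :]) ** 2).sum(axis=2)
        # Max over the bag's instances gives one value per concept.
        features[i] = np.exp(-d2 / sigma**2).max(axis=0)
    return features

# Two toy bags in 2-D; concepts are all training instances, as in MILES.
bags = [np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([[5.0, 5.0]])]
concepts = np.vstack(bags)
F = miles_embedding(bags, concepts)  # shape (2 bags, 3 concepts)
```

The resulting single-instance dataset `F` can then be handed to any propositional base learner (the 1-norm SVM in the original paper, or boosted decision stumps as studied here).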

Keywords

Support Vector Machine, Random Forest, Base Learner, Linear Support Vector Machine, Decision Stump


References

  1. Andrews, S., Tsochantaridis, I., Hofmann, T.: Support vector machines for multiple-instance learning. In: NIPS, pp. 577–584 (2002)
  2. Auer, P., Ortner, R.: A boosting approach to multiple instance learning. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) ECML 2004. LNCS (LNAI), vol. 3201, pp. 63–74. Springer, Heidelberg (2004)
  3. Braddock, P.S., Hu, D.E., Fan, T.P., Stratford, I.J., Harris, A.L., Bicknell, R.: A structure-activity analysis of antagonism of the growth factor and angiogenic activity of basic fibroblast growth factor by suramin and related polyanions. Br. J. Cancer 69(5), 890–898 (1994)
  4. Breiman, L.: Bagging predictors. ML 24(2), 123–140 (1996)
  5. Breiman, L.: Random forests. ML 45(1), 5–32 (2001)
  6. Chen, Y., Bi, J., Wang, J.Z.: MILES: Multiple-instance learning via embedded instance selection. IEEE PAMI 28(12), 1931–1947 (2006)
  7. Dietterich, T.G., Lathrop, R.H., Lozano-Pérez, T.: Solving the multiple instance problem with axis-parallel rectangles. AI 89(1-2), 31–71 (1997)
  8. Dong, L.: A comparison of multi-instance learning algorithms. Master’s thesis, University of Waikato (2006)
  9. Foulds, J.: Learning instance weights in multi-instance learning. Master’s thesis, University of Waikato (2008)
  10. Frank, E., Xu, X.: Applying propositional learning algorithms to multi-instance data. Technical report, Dept. of Computer Science, University of Waikato (2003)
  11. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: ICML, pp. 148–156 (1996)
  12. Gärtner, T., Flach, P.A., Kowalczyk, A., Smola, A.J.: Multi-instance kernels. In: ICML, pp. 179–186 (2002)
  13. Landwehr, N., Hall, M., Frank, E.: Logistic model trees. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) ECML 2003. LNCS (LNAI), vol. 2837, pp. 241–252. Springer, Heidelberg (2003)
  14. Maron, O., Lozano-Pérez, T.: A framework for multiple-instance learning. In: NIPS (1997)
  15. Mayo, M.: Effective classifiers for detecting objects. In: CIRAS (2007)
  16. Michie, D., Muggleton, S., Page, D., Srinivasan, A.: A new East-West challenge. Technical report, Oxford University Computing Laboratory (1994)
  17. Nadeau, C., Bengio, Y.: Inference for the generalization error. ML 52(3), 239–281 (2003)
  18. Opelt, A., Pinz, A., Fussenegger, M., Auer, P.: Generic object recognition with boosting. IEEE PAMI 28(3), 416–431 (2006)
  19. Platt, J.: Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods: Support Vector Learning, pp. 185–208 (1999)
  20. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)
  21. Reutemann, P.: Development of a propositionalization toolbox. Master’s thesis, Albert Ludwigs University of Freiburg (2004)
  22. Srinivasan, A., Muggleton, S., King, R.D., Sternberg, M.J.E.: Mutagenesis: ILP experiments in a non-determinate biological domain. In: ILP, pp. 217–232 (1994)
  23. Wang, C., Scott, S.D., Zhang, J., Tao, Q., Fomenko, D., Gladyshev, V.: A study in modeling low-conservation protein superfamilies. Technical report, Dept. of Comp. Sci., University of Nebraska-Lincoln (2004)
  24. Wang, J., Zucker, J.-D.: Solving the multiple-instance problem: A lazy learning approach. In: ICML, pp. 1119–1125 (2000)
  25. Weidmann, N., Frank, E., Pfahringer, B.: A two-level learning method for generalized multi-instance problems. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) ECML 2003. LNCS (LNAI), vol. 2837, pp. 468–479. Springer, Heidelberg (2003)
  26. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann, San Francisco (2005)
  27. Xu, X., Frank, E.: Logistic regression and boosting for labeled bags of instances. In: Dai, H., Srikant, R., Zhang, C. (eds.) PAKDD 2004. LNCS (LNAI), vol. 3056, pp. 272–281. Springer, Heidelberg (2004)
  28. Zhang, M.-L., Zhou, Z.-H.: Multi-instance clustering with applications to multi-instance prediction. Applied Intelligence (in press)
  29. Zhang, Q., Goldman, S.: EM-DD: An improved multiple-instance learning technique. In: NIPS, pp. 1073–1080 (2001)
  30. Zhou, Z.-H., Zhang, M.-L.: Solving multi-instance problems with classifier ensemble based on constructive clustering. KAIS 11(2), 155–170 (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • James Foulds (1)
  • Eibe Frank (1)
  1. Department of Computer Science, University of Waikato, New Zealand
