Abstract

In Multiple Instance Learning (MIL) problems, objects are represented by a set of feature vectors, in contrast to standard pattern recognition problems, where objects are represented by a single feature vector. Numerous classifiers have been proposed to solve this type of MIL classification problem. Unfortunately, only two datasets are standard in this field (MUSK-1 and MUSK-2), and all classifiers are evaluated on these datasets using the standard classification error. In practice it is very informative to investigate their learning curves, i.e. the performance on the training and test sets for a varying number of training objects. This paper offers an evaluation of several classifiers on the standard datasets MUSK-1 and MUSK-2 as a function of the training set size. The results suggest that for smaller datasets a Parzen density estimator may be preferred over the ‘optimal’ classifiers given in the literature.
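
As a minimal sketch of the learning-curve protocol described above, the following Python fragment trains a naive Parzen (kernel density) classifier at increasing training-set sizes and reports the train and test error at each size. It is illustrative only, not the paper's implementation: the synthetic bags, the Gaussian kernel with a fixed bandwidth of 0.5, the max rule that labels a bag by its highest-scoring instance, and the use of scikit-learn's KernelDensity are all assumptions made here for the example.

```python
# Hedged sketch: learning curve for a Parzen-window (kernel density) bag
# classifier on synthetic MIL-style data. Illustrative only; the paper's
# experiments used the real MUSK-1/MUSK-2 datasets, not this code.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

def make_bags(n_bags, positive, n_inst=5, dim=2):
    """Generate synthetic bags; each positive bag gets one shifted instance."""
    bags = []
    for _ in range(n_bags):
        inst = rng.normal(0.0, 1.0, size=(n_inst, dim))
        if positive:
            inst[0] += 2.5  # one "concept" instance per positive bag (assumption)
        bags.append(inst)
    return bags

def fit_parzen(bags, bandwidth=0.5):
    """Fit a Parzen density on all instances pooled from the given bags."""
    X = np.vstack(bags)
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(X)

def bag_scores(kde_pos, kde_neg, bags):
    """Score a bag by the max per-instance log-likelihood ratio (max rule)."""
    return np.array([np.max(kde_pos.score_samples(b) - kde_neg.score_samples(b))
                     for b in bags])

def error(kde_pos, kde_neg, pos_bags, neg_bags):
    """Bag-level classification error: positive iff the bag score exceeds 0."""
    s_pos = bag_scores(kde_pos, kde_neg, pos_bags)
    s_neg = bag_scores(kde_pos, kde_neg, neg_bags)
    return np.mean(np.concatenate([s_pos <= 0, s_neg > 0]))

test_pos, test_neg = make_bags(100, True), make_bags(100, False)
for n_train in (5, 10, 20, 40, 80):  # the varying training size of the learning curve
    train_pos, train_neg = make_bags(n_train, True), make_bags(n_train, False)
    kde_p, kde_n = fit_parzen(train_pos), fit_parzen(train_neg)
    print(f"n={n_train:3d}  train err={error(kde_p, kde_n, train_pos, train_neg):.3f}"
          f"  test err={error(kde_p, kde_n, test_pos, test_neg):.3f}")
```

The max rule reflects the standard MIL assumption that a bag is positive when at least one of its instances falls in the concept region; in a real evaluation the smoothing parameter would be tuned rather than fixed.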

Keywords

Linear Discriminant Analysis · Multiple Instance Learning · Training Size · Diverse Density · Multiple Instance Learning Problem

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • David M. J. Tax¹
  • Robert P. W. Duin¹

  1. Delft University of Technology, Delft, The Netherlands
