
A Probabilistic Approach to Multiple-Instance Learning

  • Silu Zhang
  • Yixin Chen
  • Dawn Wilkins
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10330)

Abstract

This paper introduces a probabilistic approach to the multiple-instance learning (MIL) problem with two Bayes classification algorithms. The first algorithm, named Instance-Vote, provides a simple approach to posterior probability estimation. The second algorithm, Embedded Kernel Density Estimation (EKDE), enables data visualization during classification. Both algorithms were evaluated on the MUSK benchmark data sets, and the results are highly competitive with existing methods.
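The paper's algorithms are not detailed in this abstract, but the general idea behind an instance-voting Bayes classifier for MIL can be sketched: estimate class-conditional instance densities from bag-labeled data, give each instance a posterior, and let instances decide the bag label. The toy data, the hand-rolled Gaussian KDE, the bandwidth, the 0.9 confidence threshold, and the any-positive bag rule below are all illustrative assumptions, not the authors' Instance-Vote method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MIL data: each bag holds five 2-D instances; a positive bag
# contains one instance drawn near (3, 3), a negative bag none.
def make_bag(positive):
    bag = rng.normal(0.0, 1.0, size=(5, 2))
    if positive:
        bag[0] = rng.normal(3.0, 0.5, size=2)
    return bag

pos_bags = [make_bag(True) for _ in range(20)]
neg_bags = [make_bag(False) for _ in range(20)]

# Pool instances by their bag's label and estimate class-conditional
# densities with a simple isotropic Gaussian KDE (hand-rolled to stay
# dependency-free).
pos_X, neg_X = np.vstack(pos_bags), np.vstack(neg_bags)

def kde_logpdf(x, data, bandwidth=0.5):
    sq = np.sum((data - x) ** 2, axis=1) / (2.0 * bandwidth ** 2)
    log_norm = -np.log(2.0 * np.pi * bandwidth ** 2)  # 2-D normaliser
    return log_norm + np.log(np.mean(np.exp(-sq)) + 1e-300)

def classify_bag(bag, threshold=0.9):
    # Each instance gets a positive-class posterior (equal priors
    # assumed); the bag is positive if any instance is a confident
    # positive -- the standard any-positive MIL rule.
    posteriors = [
        1.0 / (1.0 + np.exp(kde_logpdf(x, neg_X) - kde_logpdf(x, pos_X)))
        for x in bag
    ]
    return max(posteriors) > threshold

accuracy = np.mean(
    [classify_bag(b) for b in pos_bags]
    + [not classify_bag(b) for b in neg_bags]
)
```

Because instances in positive bags are only weakly labeled, the positive density estimate is contaminated by negative-like instances; the any-positive rule with a high threshold is one common way to remain robust to that contamination.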

Keywords

Multiple-instance learning · Non-linear dimensionality reduction · Data visualization
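As the keywords suggest, the EKDE idea couples a low-dimensional embedding with kernel density estimation, so that classification and visualization happen in the same 2-D space. A minimal, hypothetical sketch follows; it uses PCA (via SVD) as a dependency-free stand-in for the non-linear embedding, and all data and parameters are assumptions for illustration, not the paper's EKDE algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two labelled classes in 5-D; class 1 is shifted along the first axis.
X0 = rng.normal(0.0, 1.0, size=(100, 5))
X1 = rng.normal(0.0, 1.0, size=(100, 5))
X1[:, 0] += 3.0

# Embedding step: project to 2-D with PCA. (The paper pairs KDE with a
# non-linear embedding; PCA keeps this sketch self-contained.)
X = np.vstack([X0, X1])
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)

def embed(A):
    return (A - mean) @ vt[:2].T  # rows of vt are principal directions

E0, E1 = embed(X0), embed(X1)

def kde(x, data, bandwidth=0.5):
    # Isotropic Gaussian KDE evaluated at one 2-D point.
    sq = np.sum((data - x) ** 2, axis=1) / (2.0 * bandwidth ** 2)
    return np.mean(np.exp(-sq)) / (2.0 * np.pi * bandwidth ** 2)

def classify(x5d):
    # Compare class densities in the embedded space; the same 2-D
    # densities can be contoured or scattered for visualization.
    e = embed(x5d.reshape(1, -1))[0]
    return int(kde(e, E1) > kde(e, E0))
```

Estimating densities in the 2-D embedding rather than the original feature space sidesteps the curse of dimensionality in KDE and lets the decision surface be inspected visually alongside the data.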

Acknowledgements

This work was supported by the Department of Computer and Information Science, University of Mississippi.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer and Information Science, The University of Mississippi, University, USA
