Adaptive Kernel Diverse Density Estimate for Multiple Instance Learning

  • Tao Xu
  • Iker Gondra
  • David Chiu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6871)

Abstract

We present AKDDE, an adaptive kernel diverse density estimate scheme for multiple instance learning (MIL). AKDDE revises the definition of diverse density as the kernel density estimate of diverse positive bags. We show that the AKDDE is inversely proportional to the smallest region that bounds at least one instance from each positive bag. To incorporate the influence of negative bags, an objective function is constructed as the difference between the AKDDE of the positive bags and the kernel density estimate of the negative ones. The scheme is conceptually simple and compares favorably with other MIL methods. We validate AKDDE on both synthetic and real-world benchmark MIL datasets.
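The objective described above can be sketched in code. This is a minimal illustration, not the paper's exact estimator: it assumes an isotropic Gaussian kernel with a fixed bandwidth `h`, and it models the "diverse" density of the positive bags by letting each positive bag contribute the kernel value of its instance nearest to the query point, from which an ordinary kernel density estimate of the negative instances is subtracted. All function and parameter names here are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, y, h):
    # Isotropic Gaussian kernel with bandwidth h (assumed kernel choice).
    d = x - y
    return np.exp(-np.dot(d, d) / (2.0 * h ** 2))

def akdde_objective(x, positive_bags, negative_bags, h=1.0):
    """Sketch of an AKDDE-style objective at point x.

    Each positive bag contributes only its instance closest to x, so the
    positive term is large exactly where instances from *diverse* positive
    bags concentrate; the kernel density estimate over all negative
    instances is then subtracted.
    """
    # Diverse-density term: one (maximal) kernel contribution per positive bag.
    pos = sum(max(gaussian_kernel(x, inst, h) for inst in bag)
              for bag in positive_bags)
    # Ordinary KDE over the pooled negative instances.
    neg_insts = [inst for bag in negative_bags for inst in bag]
    if not neg_insts:
        return pos
    neg = sum(gaussian_kernel(x, inst, h) for inst in neg_insts) / len(neg_insts)
    return pos - neg
```

In use, one would evaluate this objective over candidate points (e.g. the instances themselves, or a grid) and take the maximizer as the learned concept point; the maximum sits where instances from all positive bags cluster tightly, consistent with the claim that the estimate is inversely related to the size of the smallest region covering one instance per positive bag.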

Keywords

Positive Instance, Negative Instance, Active Instance, Multiple Instance Learning, Instance Space



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Tao Xu (1)
  • Iker Gondra (2)
  • David Chiu (1)
  1. School of Computer Science, University of Guelph, Canada
  2. Department of Mathematics, Statistics, and Computer Science, St. Francis Xavier University, Nova Scotia, Canada
