Multiple-Instance Learning with Evolutionary Instance Selection
Multiple-Instance Learning (MIL) is a class of supervised learning tasks in which training examples are bags of instances, with labels available only at the bag level. To resolve this instance-label ambiguity, instance-selection-based MIL models were proposed to convert bag-level learning into traditional vector learning. However, existing MIL instance selection approaches draw candidates only from the instances inside the bags, so potentially informative points elsewhere in the original instance space are discarded. In this paper, we propose a novel learning method, MILEIS (Multiple-Instance Learning with Evolutionary Instance Selection), to adaptively determine the informative instances used for feature mapping. Its evolutionary search mechanism, comprising instance initialization, mutation, and crossover, allows MILEIS to adapt itself to the data without an explicit specification of a functional or distributional form for the underlying model. In doing so, MILEIS can also exploit newly created informative instances to make the feature mapping more accurate. Experiments and comparisons on real-world applications demonstrate the effectiveness of the proposed method.
Keywords: Multiple-instance learning · Instance selection · Feature mapping · Evolutionary machine learning · Classification
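The abstract describes an evolutionary search over mapping instances: candidates are initialized from instances inside the bags, then mutation and crossover can create informative instances that occur in no bag, and each candidate set is scored by how well the resulting bag embedding supports classification. The Python sketch below illustrates that general idea on synthetic data; the bag generator, the min-distance embedding, the nearest-class-mean fitness, and all parameter values are illustrative assumptions, not the paper's actual MILEIS algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MIL data (assumed for illustration): a bag is positive iff it
# contains an instance near the concept point (2, 2) in 2-D space.
CONCEPT = np.array([2.0, 2.0])

def make_bag(positive, n=6):
    inst = rng.normal(0.0, 1.0, size=(n, 2))
    if positive:
        inst[0] = CONCEPT + rng.normal(0.0, 0.2, size=2)
    return inst

bags = [make_bag(i % 2 == 0) for i in range(30)]
labels = np.array([1 if i % 2 == 0 else 0 for i in range(30)])

def embed(bag, prototypes):
    # Feature mapping: a bag becomes the vector of its minimum
    # distances to each selected mapping instance (prototype).
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)

def fitness(prototypes):
    # Score a candidate set by nearest-class-mean training accuracy
    # on the embedded bags (a stand-in for a real wrapped classifier).
    X = np.array([embed(b, prototypes) for b in bags])
    mu1, mu0 = X[labels == 1].mean(0), X[labels == 0].mean(0)
    pred = (np.linalg.norm(X - mu1, axis=1) <
            np.linalg.norm(X - mu0, axis=1)).astype(int)
    return (pred == labels).mean()

def random_candidate(k=3):
    # Initialization: sample mapping instances from inside the bags.
    picks = [bags[rng.integers(len(bags))][rng.integers(6)]
             for _ in range(k)]
    return np.array(picks)

def mutate(cand, sigma=0.3):
    # Gaussian perturbation can yield instances present in no bag.
    return cand + rng.normal(0.0, sigma, size=cand.shape)

def crossover(a, b):
    # Uniform crossover: each prototype comes from one parent.
    mask = rng.random(len(a)) < 0.5
    return np.where(mask[:, None], a, b)

# Simple generational GA with elitism over prototype sets.
pop = [random_candidate() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = [mutate(crossover(elite[rng.integers(10)],
                                 elite[rng.integers(10)]))
                for _ in range(10)]
    pop = elite + children

best = max(pop, key=fitness)
print(f"training accuracy of best candidate: {fitness(best):.2f}")
```

Because elites are carried over unchanged, the best fitness is non-decreasing across generations; mutation is what lets the search move mapping instances off the points observed inside the bags.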
This work was supported in part by the National Natural Science Foundation of China (No. 61403351), the China Scholarship Council Foundation (No. 201206410056), the key project of the Natural Science Foundation of Hubei Province, China under Grant No. 2013CFA004, the Australian Research Council (ARC) Discovery Projects under Grant No. DP140100545, the Self-Determined and Innovative Research Funds of CUG (No. 1610491T05), and the National College Students' Innovation Entrepreneurial Training Plan of CUG (Wuhan) (No. 201410491083).