
Multi-label Selective Ensemble

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9132)

Abstract

Multi-label selective ensemble deals with the problem of reducing the size of multi-label ensembles whilst keeping or improving the performance. It is of great practical value, since the generated ensembles are usually unnecessarily large, which incurs high computational and storage costs. The problem is, however, more challenging than traditional selective ensemble, because real-world applications often evaluate the quality of multi-label predictions with different performance measures, depending on user requirements. In this paper, we propose the MUSE approach to tackle this problem. Specifically, by directly considering the concerned performance measure, we develop a convex optimization formulation and provide an efficient stochastic optimization solution for a large variety of multi-label performance measures. Experiments show that MUSE obtains smaller multi-label ensembles whilst achieving better or at least comparable performance in terms of the concerned performance measure.
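
To make the above concrete, below is a minimal sketch (in Python, assuming NumPy) of the general recipe the abstract describes: prune a multi-label ensemble by learning a sparse, non-negative weight vector over its members with stochastic subgradient steps on a convex surrogate of the concerned performance measure, here a hinge surrogate of Hamming loss, plus an l1 penalty that drives the weights of redundant members to zero. This is not the authors' MUSE implementation; the function name, the surrogate loss and all hyperparameters are illustrative assumptions.

import numpy as np

def prune_ensemble(preds, targets, lam=0.05, lr=0.1, epochs=50, seed=0):
    """preds: (m, n, q) array of {-1, +1} base-learner predictions for m
    members, n instances and q labels; targets: (n, q) array of {-1, +1}
    ground-truth labels. Returns a sparse, non-negative weight vector of
    length m; members with zero weight are dropped from the ensemble."""
    rng = np.random.default_rng(seed)
    m, n, q = preds.shape
    w = np.full(m, 1.0 / m)                       # start from uniform voting
    for _ in range(epochs):
        for i in rng.permutation(n):
            # weighted vote margins for instance i, one per label
            margin = targets[i] * np.einsum('k,kl->l', w, preds[:, i, :])
            active = margin < 1.0                 # labels violating the margin
            # subgradient of the hinge surrogate of per-label (Hamming) error
            grad = -(targets[i, active] * preds[:, i, active]).sum(axis=1) / q
            w -= lr * (grad + lam * np.sign(w))   # loss step + l1 subgradient
            w = np.maximum(w, 0.0)                # project onto non-negativity
    return w

# toy usage: 10 base learners that are noisy copies of the true labels
rng = np.random.default_rng(1)
Y = rng.choice([-1, 1], size=(100, 5))
P = np.where(rng.random((10, 100, 5)) < 0.8, Y, -Y)
w = prune_ensemble(P, Y)
print("selected members:", np.flatnonzero(w > 1e-8))

Under this view, targeting a different performance measure (e.g., ranking loss or F-measure) would swap in a different surrogate loss, while the selection mechanism, sparse weights removing members from the ensemble, stays unchanged.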

Acknowledgements

We thank the anonymous reviewers for their helpful comments. This research was supported by the National Science Foundation of China (61273301).

Author information

Corresponding author

Correspondence to Nan Li.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Li, N., Jiang, Y., Zhou, Z.-H. (2015). Multi-label Selective Ensemble. In: Schwenker, F., Roli, F., Kittler, J. (eds.) Multiple Classifier Systems. MCS 2015. Lecture Notes in Computer Science, vol. 9132. Springer, Cham. https://doi.org/10.1007/978-3-319-20248-8_7

  • DOI: https://doi.org/10.1007/978-3-319-20248-8_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-20247-1

  • Online ISBN: 978-3-319-20248-8

  • eBook Packages: Computer Science (R0)
