
Latent Topic-Aware Multi-label Classification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12359)

Abstract

In real-world applications, data are often associated with multiple labels. Although most existing multi-label learning algorithms consider label correlations, they rarely exploit the topic information hidden in the labels, where each topic is a group of related labels and different topics comprise different groups of labels. In this study, we assume that the labels in each topic share a common feature representation, so that feature-label correlation can be exploited in the latent topic space. This paper shows that sample and feature extraction, two important procedures for removing noisy and redundant information from the training data at the sample and feature levels, can be performed effectively and efficiently in the latent topic space by considering topic-based feature-label correlation. Empirical studies on several benchmarks demonstrate the effectiveness and efficiency of the proposed topic-aware framework.
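
Below is a minimal sketch of one way such a topic-aware decomposition could be set up, assuming an NMF-style factorization of the label matrix into latent topics and simple correlation-based scores for features and samples. The factorization choice, the scoring rules, and the function name `topic_aware_feature_scores` are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: factorize the label matrix into latent topics with
# NMF, then score features and samples by their relation to the topic space.
import numpy as np
from sklearn.decomposition import NMF

def topic_aware_feature_scores(X, Y, n_topics=5, random_state=0):
    """X: (n_samples, n_features) feature matrix; Y: (n_samples, n_labels) nonnegative label matrix."""
    # Factorize labels: Y ~ S @ T, where S holds each sample's topic weights
    # and each row of T groups related labels into one topic.
    nmf = NMF(n_components=n_topics, init="nndsvda",
              random_state=random_state, max_iter=500)
    S = nmf.fit_transform(Y)   # sample-topic matrix, shape (n_samples, n_topics)
    T = nmf.components_        # topic-label matrix, shape (n_topics, n_labels)

    # Score each feature by its absolute correlation with every latent topic;
    # features weakly related to all topics are treated as redundant.
    Xc = X - X.mean(axis=0)
    Sc = S - S.mean(axis=0)
    corr = (Xc.T @ Sc) / (np.linalg.norm(Xc, axis=0)[:, None]
                          * np.linalg.norm(Sc, axis=0)[None, :] + 1e-12)
    feature_scores = np.abs(corr).max(axis=1)

    # Score each sample by how strongly it expresses any topic;
    # low-scoring samples can be pruned as noisy.
    sample_scores = S.max(axis=1)
    return feature_scores, sample_scores, T

# Example usage on random data:
# X = np.random.rand(100, 40)
# Y = (np.random.rand(100, 10) > 0.8).astype(float)
# f_scores, s_scores, topics = topic_aware_feature_scores(X, Y, n_topics=4)
```

In this sketch, ranking features and samples by the returned scores and keeping only the top-scoring ones is the analogue of the sample and feature extraction step performed in the latent topic space.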

Keywords

Multi-label learning · Sample and feature extraction · Feature-label correlation · Topic


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. City University of Hong Kong, Hong Kong, China
  2. The Hong Kong University of Science and Technology, Hong Kong, China
