Weighted Multi-label Learning with Rank Preservation
As one of the central topics in machine learning, multi-label learning is widely applied in practice. Classical algorithms do not consider the rank and the weight between labels simultaneously, even though the correlation between labels has a marked impact on the quality of classification models; as a result, such algorithms cannot be applied in some scenarios, and model accuracy suffers. To solve this problem, a new algorithm named weighted multi-label learning with rank preservation (abbrev. WMR) is proposed. WMR extends and optimizes the SVM-based multi-label learning algorithm by introducing two kinds of label pairs, called “related-unrelated” and “related-related” label pairs, to measure the rank and weight between labels. Experiments on real datasets compare WMR with the RankSVM algorithm, and the results show that WMR fully mines the correlation between labels and effectively improves the quality of the classification model.
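The abstract does not give WMR's exact objective, but the general idea of ranking losses over the two kinds of label pairs can be sketched as follows. This is a minimal illustration, not the paper's formulation: the function name `wmr_style_ranking_loss`, the unit margin, the per-label `importance` vector used to order related labels, and the pair-type weights `w_ru` and `w_rr` are all assumptions made for the example, in the spirit of a RankSVM-style hinge loss.

```python
import numpy as np

def wmr_style_ranking_loss(scores, labels, importance, w_ru=1.0, w_rr=0.5):
    """Illustrative hinge-style ranking loss over two kinds of label pairs.

    "Related-unrelated" pairs: every relevant label should score higher
    than every irrelevant label (the classical RankSVM-style constraint).
    "Related-related" pairs: among relevant labels, a more important
    label should score higher, preserving the rank between them.

    scores     : (L,) real-valued model outputs, one per label
    labels     : (L,) binary relevance indicators (1 = related)
    importance : (L,) assumed weights ordering the related labels
    w_ru, w_rr : hypothetical pair-type weights
    """
    related = np.flatnonzero(labels == 1)
    unrelated = np.flatnonzero(labels == 0)
    loss = 0.0
    # "Related-unrelated" pairs: enforce a unit margin s_i - s_j >= 1.
    for i in related:
        for j in unrelated:
            loss += w_ru * max(0.0, 1.0 - (scores[i] - scores[j]))
    # "Related-related" pairs: preserve the importance ordering
    # between relevant labels with the same unit margin.
    for i in related:
        for j in related:
            if importance[i] > importance[j]:
                loss += w_rr * max(0.0, 1.0 - (scores[i] - scores[j]))
    return loss

# A prediction that respects both pair types incurs zero loss:
scores = np.array([2.0, 1.0, -1.0])      # label 0 ranked above 1, both above 2
labels = np.array([1, 1, 0])             # labels 0 and 1 are related
importance = np.array([2.0, 1.0, 0.0])   # label 0 more important than 1
print(wmr_style_ranking_loss(scores, labels, importance))  # 0.0
```

Weighting the two pair types separately lets the model trade off separating related from unrelated labels against preserving the rank among the related ones.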
Keywords: Multi-label learning · Correlation between labels · Weight · Rank
This work is supported by the National Fund Major Project (17ZDA166) and the Central University Basic Research Business Expenses Special Fund Project (CZY18015).
- 1. Maron, O.: Learning from ambiguity. Ph.D. dissertation, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, June 1998
- 3. Read, J., Pfahringer, B., Holmes, G.: Multi-label classification using ensembles of pruned sets. In: 8th IEEE International Conference on Data Mining (ICDM 2008), Pisa, Italy, pp. 995–1000 (2008)
- 7. Chen, W., Yan, J., Zhang, B., Chen, Z., Yang, Q.: Document transformation for multi-label feature selection in text categorization. In: 7th IEEE International Conference on Data Mining (ICDM 2007), pp. 451–456 (2007)
- 10. Read, J.: A pruned problem transformation method for multi-label classification. In: Proceedings of the 2008 New Zealand Computer Science Research Student Conference, pp. 143–150 (2008)
- 12. Elisseeff, A., Weston, J.: Kernel methods for multi-labelled classification and categorical regression problems. Technical report, BIOwulf Technologies (2001)
- 13. McCallum, A.: Multi-label text classification with a mixture model trained by EM. In: AAAI 1999 Workshop on Text Learning, pp. 1–7 (1999)
- 14. Ghamrawi, N., McCallum, A.: Collective multi-label classification. In: Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pp. 195–200 (2005)
- 16. Guo, Y., Gu, S.: Multi-label classification using conditional dependency networks. In: 24th International Joint Conference on Artificial Intelligence (IJCAI 2011), pp. 1300–1305. IJCAI/AAAI (2011)
- 17. Berger, M.J.: Large scale multi-label text classification with semantic word vectors. Technical report, Stanford University, Stanford, CA 94305, pp. 1–8 (2014)
- 18. Kurata, G., Xiang, B., Zhou, B.: Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence. San Diego, California, 12–17 June 2016, pp. 521–526. Association for Computational Linguistics (2016)
- 19. Chen, G., Ye, D., Xing, Z., Chen, J., Cambria, E.: Ensemble application of convolutional and recurrent neural networks for multi-label text categorization. https://doi.org/10.1109/IJCNN.2017.7966144