
A Two-Stage Conditional Random Field Model Based Framework for Multi-Label Classification

  • Abhiram Kumar Singh
  • C. Chandra Sekhar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10597)

Abstract

Multi-label classification (MLC) deals with the task of assigning an instance to all of its relevant classes. This task becomes challenging in the presence of label dependencies, and MLC methods that assume label independence cannot exploit them. We present a two-stage framework that improves the performance of MLC by using label dependencies. In the first stage, a standard MLC method is used to obtain confidence scores for the different labels. In the second stage, a conditional random field (CRF) refines the first-stage predictions by exploiting the dependencies among labels. An optimization-based framework is used to learn the structure and parameters of the CRF. Experiments show that the proposed model performs better than state-of-the-art methods for MLC.
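
To make the two-stage idea concrete, below is a minimal sketch, not the authors' implementation. It assumes stage one is binary relevance with per-label logistic regression (standing in for "a standard MLC method") and stage two is a fully connected pairwise CRF over the binary label variables, conditioned on the stage-one confidence scores and decoded by brute-force enumeration (feasible only for a handful of labels). The paper's structure and parameter learning is not reproduced here; the names stage1_scores, stage2_crf_decode, unary_w and pair_w are illustrative.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Stage 1: binary relevance with per-label logistic regression gives a
#          confidence score s_l in [0, 1] for each label.
# Stage 2: a fully connected pairwise CRF over the binary label variables,
#          conditioned on the stage-1 scores, decoded by enumerating all
#          2^L label vectors (toy-scale only).
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier


def stage1_scores(X_train, Y_train, X_test):
    """Fit one logistic regression per label; return per-label confidence scores."""
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_train, Y_train)
    return clf.predict_proba(X_test)          # shape: (n_samples, n_labels)


def stage2_crf_decode(scores, unary_w, pair_w):
    """Return the label vector maximising unary (score-label agreement) plus
    pairwise (label-label agreement) potentials, in +/-1 spin encoding."""
    n_labels = scores.shape[1]
    decoded = []
    for s in scores:
        spin_s = 2.0 * s - 1.0                # map scores to [-1, 1]
        best_y, best_val = None, -np.inf
        for y in itertools.product([0, 1], repeat=n_labels):
            spin_y = 2.0 * np.asarray(y) - 1.0
            val = unary_w * spin_s.dot(spin_y)            # unary potentials
            val += 0.5 * spin_y.dot(pair_w).dot(spin_y)   # pairwise potentials
            if val > best_val:
                best_y, best_val = np.asarray(y), val
        decoded.append(best_y)
    return np.array(decoded)


# Toy usage on synthetic data; pair_w is set from empirical label correlations
# as a crude placeholder for the CRF parameters the paper learns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = (X[:, :4] + rng.normal(scale=0.5, size=(200, 4)) > 0).astype(int)
scores = stage1_scores(X[:150], Y[:150], X[150:])
pair_w = np.corrcoef((2 * Y[:150] - 1).T)
np.fill_diagonal(pair_w, 0.0)
Y_hat = stage2_crf_decode(scores, unary_w=2.0, pair_w=pair_w)
print(Y_hat[:5])
```

The empirical-correlation pairwise weights above are only a stand-in; in the paper both the CRF structure and its parameters are learned with an optimization-based procedure.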

Keywords

Label dependence · Conditional random field · Multi-label classification


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India
