Evidential Multi-label Classification Using the Random k-Label Sets Approach

Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 164)

Abstract

Multi-label classification deals with problems in which each instance can be associated with a set of labels. An effective multi-label method, RAkEL, randomly breaks the initial set of labels into smaller subsets and trains a single-label classifier on each of them. To classify an unseen instance, the predictions of all classifiers are combined through a voting process. In this paper, we adapt the RAkEL approach within the belief function framework applied to set-valued variables. Evidence theory enables us to handle lack of information by associating a mass function with each classifier and combining these mass functions conjunctively. Experiments on real datasets demonstrate that our approach improves classification performance.
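The conjunctive combination step described above can be illustrated with a minimal sketch. This is a simplification for illustration only: it assumes mass functions defined directly over subsets of labels, whereas the paper's full framework handles set-valued variables on a richer lattice. The function name and the example masses are hypothetical, not taken from the paper. The unnormalized conjunctive rule multiplies the masses of every pair of focal sets and accumulates the products on their intersections:

```python
from itertools import product

def conjunctive_combine(m1, m2):
    """Unnormalized conjunctive rule of combination.

    m1, m2: dicts mapping frozensets of labels (focal sets) to masses.
    The mass of each pair of focal sets is multiplied and assigned
    to their intersection.
    """
    combined = {}
    for (A, wA), (B, wB) in product(m1.items(), m2.items()):
        C = A & B
        combined[C] = combined.get(C, 0.0) + wA * wB
    return combined

# Hypothetical masses from two label-subset classifiers over labels {a, b, c}.
m1 = {frozenset({"a", "b"}): 0.7, frozenset({"a", "b", "c"}): 0.3}
m2 = {frozenset({"b", "c"}): 0.6, frozenset({"a", "b", "c"}): 0.4}

m = conjunctive_combine(m1, m2)
# Most mass (0.42) ends up on {b}, the label both classifiers support.
```

Compared with the majority voting used by standard RAkEL, the combined mass function retains how much each classifier committed to each label set, so remaining ignorance (mass on large focal sets) is preserved rather than averaged away.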

Keywords

Linear discriminant analysis, belief function, evidence theory, true label, voting process


References

  1. Boutell, M.R., Shen, J., Brown, C.M.: Learning multi-label scene classification. Pattern Recognition 37(9), 1757–1771 (2004)
  2. Denœux, T., Masson, M.-H.: Evidential reasoning in large partially ordered sets. Application to multi-label classification, ensemble clustering and preference aggregation. Annals of Operations Research (2011) (accepted for publication), doi:10.1007/s10479-011-0887-2
  3. Denoeux, T., Younes, Z., Abdallah, F.: Representing uncertainty on set-valued variables using belief functions. Artificial Intelligence 174, 479–499 (2010)
  4. Ghamrawi, N., McCallum, A.: Collective multi-label classification. In: 14th ACM International Conference on Information and Knowledge Management (2005)
  5. Read, J., Pfahringer, B., Holmes, G., Frank, E.: Classifier chains for multi-label classification. In: Proc. of the 20th European Conference on Machine Learning, ECML 2009 (2009)
  6. Schapire, R., Singer, Y.: BoosTexter: a boosting-based system for text categorization. Machine Learning 39, 135–168 (2000)
  7. Trohidis, K., Tsoumakas, G., Kalliris, G., Vlahavas, I.: Multilabel classification of music into emotions. In: Proc. 9th International Conference on Music Information Retrieval (ISMIR 2008), pp. 325–330 (2008)
  8. Tsoumakas, G., Katakis, I.: Multi-label classification: An overview. International Journal of Data Warehousing and Mining 3(3), 1–13 (2007)
  9. Tsoumakas, G., Vlahavas, I.: Random k-labelsets: An ensemble method for multilabel classification. In: Proc. 18th European Conference on Machine Learning, September 17–21 (2007)
  10. Younes, Z., Abdallah, F., Denoeux, T., Snoussi, H.: A dependent multilabel classification method derived from the k-nearest neighbor rule. EURASIP Journal on Advances in Signal Processing, Article ID 645964, 14 (2011), doi:10.1155/2011/645964

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. CNRS, UMR 7253 Heudiasyc, Université de Technologie de Compiègne, Compiègne, France