Abstract
The F-measure, originally introduced in information retrieval, is nowadays routinely used as a performance metric in problems such as binary classification, multi-label classification, and structured output prediction. In this paper, we describe the methods we applied in the JRS 2012 Data Mining Competition for topical classification, where the instance-based F-measure served as the evaluation metric. Optimizing this measure is statistically and computationally challenging, since no closed-form maximizer exists. It has been shown recently, however, that the F-measure maximizer can be computed efficiently if certain properties of the label distribution are known. For independent labels, the marginal probabilities suffice: a dynamic programming algorithm then computes the F-measure maximizer in time cubic in the number of labels. For dependent labels, a quadratic number (in the number of labels) of parameters of the joint distribution is needed to compute the maximizer, again in cubic time. These results suggest a two-step procedure: first, an algorithm estimates the required parameters of the distribution; then, an inference algorithm computes the F-measure maximizer from these estimates. This procedure achieved a very satisfactory result in the JRS 2012 Data Mining Competition.
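To make the independent-label case concrete, the sketch below computes the expected instance-based F-measure of a prediction under independent Bernoulli labels, and finds the maximizer by evaluating top-k predictions for every k; for independent labels it is known that the maximizer always selects, for some k, the k labels with the highest marginal probabilities. This is a brute-force illustration for small label sets, not the cubic-time dynamic program described in the abstract, and the function names are our own.

```python
from itertools import product

def expected_f(h, p):
    """Expected instance-based F-measure of binary prediction h under
    independent Bernoulli labels with marginal probabilities p.
    Brute-force enumeration over all 2^m label vectors (small m only)."""
    m = len(p)
    total = 0.0
    for y in product([0, 1], repeat=m):
        prob = 1.0
        for yi, pi in zip(y, p):
            prob *= pi if yi else (1.0 - pi)
        tp = sum(hi * yi for hi, yi in zip(h, y))
        denom = sum(y) + sum(h)
        # Convention: F(0, 0) = 1 when both label vector and prediction are empty.
        f = 2.0 * tp / denom if denom > 0 else 1.0
        total += prob * f
    return total

def f_maximizer(p):
    """Search over top-k predictions, k = 0..m: sufficient for independent
    labels, since the maximizer always predicts the k labels with the
    highest marginals for some k."""
    m = len(p)
    order = sorted(range(m), key=lambda i: -p[i])
    best_h, best_val = [0] * m, expected_f([0] * m, p)
    for k in range(1, m + 1):
        h = [0] * m
        for i in order[:k]:
            h[i] = 1
        val = expected_f(h, p)
        if val > best_val:
            best_h, best_val = h, val
    return best_h, best_val
```

For example, with marginals `p = [0.9, 0.1]` the maximizer predicts only the first label. In the dependent-label case, the marginals are replaced by the quadratic number of joint-distribution parameters mentioned above, and the inference step changes accordingly.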
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Cheng, W., Dembczyński, K., Hüllermeier, E., Jaroszewicz, A., Waegeman, W. (2012). F-Measure Maximization in Topical Classification. In: Yao, J., et al. (eds.) Rough Sets and Current Trends in Computing. RSCTC 2012. Lecture Notes in Computer Science, vol 7413. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32115-3_52
DOI: https://doi.org/10.1007/978-3-642-32115-3_52
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-32114-6
Online ISBN: 978-3-642-32115-3