
Finite-State Methods and Natural Language Processing

Lecture Notes in Computer Science, vol. 4002, pp. 97–109

Algorithms for Minimum Risk Chunking

  • Martin Jansche (Center for Computational Learning Systems, Columbia University)



Abstract

Stochastic finite automata are useful for identifying substrings (chunks) within larger units of text. Relevant applications include tokenization, base-NP chunking, named entity recognition, and other information extraction tasks. For a given input string, a stochastic automaton represents a probability distribution over strings of labels encoding the locations of chunks. For chunking and extraction tasks, the quality of predictions is evaluated in terms of precision and recall of the chunked/extracted phrases when compared against a gold standard. However, traditional methods for estimating the parameters of a stochastic finite automaton and for decoding the best hypothesis pay no attention to the evaluation criterion, which we take to be the well-known F-measure. We are interested in methods that remedy this situation, both in training and in decoding. Our main result is a novel algorithm for efficiently evaluating expected F-measure. We present the algorithm and discuss its applications to utility/risk-based parameter estimation and decoding.
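To make the quantity at stake concrete, the sketch below computes the expected F-measure of a candidate labeling under a distribution over reference labelings by brute-force enumeration. This is only an illustration of the objective, not the paper's efficient algorithm; the BIO label encoding, the explicitly enumerated toy distribution, and the helper names (chunk_spans, f1, expected_f1) are assumptions introduced here for exposition.

```python
# Brute-force illustration of expected F-measure over chunkings.
# Assumptions (not from the source): chunks are encoded with BIO labels,
# the model distribution over reference labelings is given explicitly,
# and F-measure is the balanced F1 over chunk spans.
from itertools import product


def chunk_spans(labels):
    """Return the set of (start, end) spans encoded by a BIO label sequence."""
    spans, start = set(), None
    for i, lab in enumerate(labels):
        if lab == "B":
            if start is not None:
                spans.add((start, i))
            start = i
        elif lab == "O":
            if start is not None:
                spans.add((start, i))
            start = None
        # stray "I" outside a chunk is simply ignored in this toy encoding
    if start is not None:
        spans.add((start, len(labels)))
    return spans


def f1(predicted, reference):
    """Balanced F-measure of predicted vs. reference chunk spans."""
    if not predicted and not reference:
        return 1.0  # convention: empty vs. empty counts as perfect
    tp = len(predicted & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)


def expected_f1(hypothesis, distribution):
    """Expected F-measure of a hypothesis labeling under a distribution
    over reference labelings (naive enumeration, for illustration only)."""
    hyp_spans = chunk_spans(hypothesis)
    return sum(p * f1(hyp_spans, chunk_spans(ref))
               for ref, p in distribution.items())


if __name__ == "__main__":
    # Toy distribution over reference labelings of a 3-token input.
    dist = {
        ("B", "I", "O"): 0.6,
        ("B", "O", "B"): 0.3,
        ("O", "O", "O"): 0.1,
    }
    # Minimum-risk decoding here means picking the labeling with the
    # highest expected F-measure (exponential search in this naive version).
    best = max(product("BIO", repeat=3), key=lambda h: expected_f1(h, dist))
    print(best, expected_f1(best, dist))
```

The exponential enumeration above is exactly what an efficient evaluation of expected F-measure must avoid; the paper's contribution is an algorithm that computes this expectation without enumerating all label sequences.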