Active Hidden Markov Models for Information Extraction
Information extraction from HTML documents requires a classifier capable of assigning semantic labels to the words or word sequences to be extracted. If completely labeled documents are available for training, well-known Markov model techniques can be used to learn such classifiers. In this paper, we consider the more challenging task of learning hidden Markov models (HMMs) when only partially (sparsely) labeled documents are available for training. We first give a detailed account of the task and its appropriate loss function, and show how that loss can be minimized given an HMM. We describe an EM-style algorithm for learning HMMs from partially labeled data. We then present an active learning algorithm that selects "difficult" unlabeled tokens and asks the user to label them. We study empirically by how much active learning reduces the required data-labeling effort, or increases the quality of the learned model achievable with a given amount of user effort.
Keywords: Hidden Markov Model, Information Extraction, Observation Sequence, Small Margin, Semantic Label
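The "small margin" selection strategy described in the abstract can be illustrated with a minimal sketch. Assume per-token posterior distributions over semantic labels are already available (e.g., from forward–backward inference in the HMM); the function name and data layout below are illustrative assumptions, not taken from the paper.

```python
def smallest_margin_tokens(posteriors, k):
    """Return indices of the k tokens whose top two label
    probabilities are closest together, i.e., the 'difficult'
    tokens an active learner would ask the user to label.

    posteriors: list of per-token probability distributions
                over the semantic labels (illustrative layout).
    """
    margins = []
    for i, dist in enumerate(posteriors):
        top_two = sorted(dist, reverse=True)[:2]
        # Margin between the most likely and second most likely label;
        # a small margin means the model is uncertain about this token.
        margin = top_two[0] - (top_two[1] if len(top_two) > 1 else 0.0)
        margins.append((margin, i))
    margins.sort()
    return [i for _, i in margins[:k]]

# Example: three tokens, each with a posterior over three labels.
posteriors = [
    [0.90, 0.05, 0.05],  # confident -> large margin, not queried
    [0.40, 0.35, 0.25],  # ambiguous -> small margin, query the user
    [0.55, 0.30, 0.15],
]
print(smallest_margin_tokens(posteriors, 1))  # -> [1]
```

Selecting by smallest margin (rather than, say, lowest maximum probability) targets tokens where two labels compete closely, which is where a single user-provided label most constrains the model.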