A Simple and Efficient Model Pruning Method for Conditional Random Fields

  • Hai Zhao
  • Chunyu Kit
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5459)


Conditional random fields (CRFs) have been quite successful in various machine learning tasks. However, as ever larger data sets become tractable on current hardware, CRF models trained for real applications inflate quickly; researchers now routinely work with models containing tens of millions of features. This paper considers pruning an existing CRF model to reduce storage and speed up decoding. We propose a simple but efficient rank metric defined over feature groups, rather than over the individual features that previous work has usually focused on. A series of experiments on two typical labeling tasks, Chinese word segmentation and named entity recognition, verifies the effectiveness of the proposed method. The results are quite positive and show that CRF models are highly redundant, even with a carefully selected label set and feature templates.
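The group-level pruning idea can be illustrated with a minimal sketch. Here each feature is assumed to be keyed by the template that generated it, and a whole group is kept or discarded according to an aggregate score. The score used below (sum of absolute weights per group) and all names (`prune_by_group`, the `(template, feature)` key layout) are illustrative assumptions, not the paper's actual metric:

```python
from collections import defaultdict

def prune_by_group(weights, threshold):
    """Prune a trained CRF's features at the granularity of feature
    groups (here, all features produced by one template), keeping a
    group only if its aggregate score exceeds `threshold`.

    `weights` maps (template_id, feature_string) -> learned weight.
    The group score (sum of |w| over the group) is one plausible rank
    metric; the paper's own metric may differ.
    """
    group_score = defaultdict(float)
    for (template, _feat), w in weights.items():
        group_score[template] += abs(w)

    # Keep every feature whose group survives the threshold.
    return {key: w for key, w in weights.items()
            if group_score[key[0]] > threshold}
```

Pruning whole groups rather than single features keeps the decision cheap (one score per template) and lets the decoder skip entire templates at test time, which is where the decoding speedup comes from.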


Keywords: Conditional random fields, Model pruning





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Hai Zhao¹
  • Chunyu Kit¹

  1. Department of Chinese, Translation and Linguistics, City University of Hong Kong, Hong Kong, China
