
Combining Discrete and Neural Features for Sequence Labeling

  • Conference paper
Computational Linguistics and Intelligent Text Processing (CICLing 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9623)

Abstract

Neural network models have recently received intense research attention in the natural language processing community. Compared with traditional models with discrete features, neural models have two main advantages. First, they take low-dimensional, real-valued embedding vectors as inputs, which can be trained over large raw data, thereby addressing the issue of feature sparsity in discrete models. Second, deep neural networks can automatically combine input features, including non-local features that capture semantic patterns which cannot be expressed using discrete indicator features. As a result, neural network models have achieved competitive accuracies compared with the best discrete models for a range of NLP tasks.

On the other hand, manual feature templates have been carefully investigated for most NLP tasks over decades, and typically cover the most useful indicator patterns for solving the problems. Such information can be complementary to the features automatically induced by neural networks, and therefore combining discrete and neural features can potentially lead to better accuracy compared with models that leverage discrete or neural features only.

In this paper, we systematically investigate the effect of combining discrete and neural features for a range of fundamental NLP tasks based on sequence labeling, including word segmentation, POS tagging and named entity recognition for both Chinese and English. Our results on standard benchmarks show that state-of-the-art neural models give accuracies comparable to the best discrete models in the literature for most tasks, and that combining discrete and neural features consistently yields better results.
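The core idea of the abstract can be illustrated with a minimal sketch (not the paper's actual model): discrete indicator features from hand-crafted templates and dense embedding features are concatenated into one input vector, so a single scoring layer combines both views of each token. All names, dimensions, and feature templates below are hypothetical illustrations, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

LABELS = ["B", "I", "O"]      # hypothetical BIO tag set for sequence labeling
EMB_DIM, DISC_DIM = 8, 16     # small sizes chosen only for illustration

def discrete_features(tokens, i, dim=DISC_DIM):
    """Sparse indicator features from hand-crafted templates
    (current word, previous word, 2-char suffix), hashed into a
    fixed-size binary vector."""
    v = np.zeros(dim)
    templates = [
        "w0=" + tokens[i],
        "w-1=" + (tokens[i - 1] if i > 0 else "<s>"),
        "suf2=" + tokens[i][-2:],
    ]
    for f in templates:
        v[hash(f) % dim] = 1.0
    return v

# Hypothetical embedding table: in practice these dense vectors
# would be pre-trained over large raw text.
embeddings = {}
def embed(word):
    if word not in embeddings:
        embeddings[word] = rng.normal(size=EMB_DIM)
    return embeddings[word]

# Joint scorer: one weight matrix over the concatenated vector, so
# discrete and neural features contribute to the same label scores.
W = rng.normal(size=(len(LABELS), EMB_DIM + DISC_DIM))

def label_scores(tokens, i):
    x = np.concatenate([embed(tokens[i]), discrete_features(tokens, i)])
    return W @ x

tokens = "John lives in London".split()
pred = [LABELS[int(np.argmax(label_scores(tokens, i)))] for i in range(len(tokens))]
print(pred)  # one tag per token; arbitrary here since W is untrained
```

In a full system the per-token scores would feed a structured decoder such as a CRF layer, and the weights for both feature types would be trained jointly; the sketch only shows where the two feature vectors meet.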


Notes

  1. https://catalog.ldc.upenn.edu/LDC2011T13.

  2. http://www.sighan.org/bakeoff2005.



Acknowledgments

We would like to thank the anonymous reviewers for their detailed comments. This work is supported by the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301.

Author information

Correspondence to Yue Zhang.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Yang, J., Teng, Z., Zhang, M., Zhang, Y. (2018). Combining Discrete and Neural Features for Sequence Labeling. In: Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. CICLing 2016. Lecture Notes in Computer Science, vol 9623. Springer, Cham. https://doi.org/10.1007/978-3-319-75477-2_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-75477-2_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-75476-5

  • Online ISBN: 978-3-319-75477-2

  • eBook Packages: Computer Science (R0)
