Nested Named Entity Recognition Using Multilayer Recurrent Neural Networks

  • Truong-Son Nguyen
  • Le-Minh Nguyen
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 781)


Many named entities are embedded in other entities, but current models focus only on recognizing entities at the top level. In this paper, we propose two approaches to the nested named entity recognition task, modeling it as a multilayer sequence labeling task. First, we propose a model that integrates linguistic features with a neural network to improve the performance of named entity recognition (NER) systems; we then recognize nested named entities with a sequence of such models, each responsible for predicting the entities at one layer. This approach is inconvenient, however, because many single models must be trained to cover all layers of nesting. In the second approach, we propose a novel model, called multilayer recurrent neural networks, that recognizes all nested entities at the same time. Experimental results on a Vietnamese data set show that the proposed models outperform previous approaches. Our model yields state-of-the-art results for Vietnamese, with F1 scores of 92.97% at the top level and 74.74% at the nested level. For English, our NER systems also achieve better performance.
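The multilayer framing described above can be illustrated with a small sketch (not the authors' code): nested entities are flattened into one BIO tag sequence per nesting depth, so that each layer can be handled by a standard sequence labeler. The span format and helper name below are hypothetical, chosen only for illustration.

```python
def spans_to_bio_layers(n_tokens, spans):
    """Convert nested entity spans into one BIO tag sequence per layer.

    spans: list of (start, end, label) with end exclusive; spans may nest.
    Returns a list of tag sequences, outermost layer first.
    """
    # Depth of a span = number of other spans that strictly contain it.
    by_depth = {}
    for (s, e, lab) in spans:
        depth = sum(
            1
            for (s2, e2, _) in spans
            if s2 <= s and e <= e2 and (s2, e2) != (s, e)
        )
        by_depth.setdefault(depth, []).append((s, e, lab))

    layers = []
    for d in sorted(by_depth):
        tags = ["O"] * n_tokens
        for (s, e, lab) in by_depth[d]:
            tags[s] = "B-" + lab
            for i in range(s + 1, e):
                tags[i] = "I-" + lab
        layers.append(tags)
    return layers


# "Bank of China": an ORG containing a nested GPE ("China").
print(spans_to_bio_layers(3, [(0, 3, "ORG"), (2, 3, "GPE")]))
# → [['B-ORG', 'I-ORG', 'I-ORG'], ['O', 'O', 'B-GPE']]
```

Under this encoding, the first approach in the paper trains one tagger per layer, while the multilayer recurrent network predicts all layers jointly.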


Keywords: Nested named entity recognition · Deep learning · Recurrent neural networks



This work was supported by JSPS KAKENHI Grant number JP15K16048.



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Japan Advanced Institute of Science and Technology, Nomi, Japan
  2. University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
