
Bidirectional LSTM Joint Model for Intent Classification and Named Entity Recognition in Natural Language Understanding

  • Akson Sam Varghese (corresponding author)
  • Saleha Sarang
  • Vipul Yadav
  • Bharat Karotra
  • Niketa Gandhi
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 941)

Abstract

The aim of this paper is to present a simple LSTM and a bidirectional LSTM in a joint model framework for the Intent Classification and Named Entity Recognition (NER) tasks, both approached as classification problems. The paper compares single models against joint models on the respective tasks, describes a data augmentation algorithm, and shows how the joint framework helped a poorly performing NER model learn by adding the learned weights from a well-performing Intent Classification model. The experiments show approximately a 44% improvement in the performance of the NER model inside the joint model compared to testing it as an independent model.
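
As an illustration of the joint model framework the abstract describes, the following is a minimal PyTorch sketch of one common way to wire such a model: a single bidirectional LSTM encoder shared by an utterance-level intent head and a token-level NER head, trained by summing the two cross-entropy losses so that gradients from the stronger intent task also shape the weights the NER head depends on. All names, layer sizes, and label counts (e.g. 26 intents, 120 entity tags) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class JointBiLSTM(nn.Module):
    # Shared bidirectional LSTM encoder with two heads: an
    # utterance-level intent classifier and a token-level NER tagger.
    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 num_intents, num_tags):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.5)  # regularization against overfitting
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.ner_head = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq, embed)
        outputs, (h_n, _) = self.encoder(embedded)  # (batch, seq, 2*hidden)
        # Final forward and backward hidden states summarize the utterance
        # for the intent prediction.
        sentence = torch.cat([h_n[0], h_n[1]], dim=-1)     # (batch, 2*hidden)
        intent_logits = self.intent_head(self.dropout(sentence))
        tag_logits = self.ner_head(self.dropout(outputs))  # per-token scores
        return intent_logits, tag_logits

# Joint training on a toy batch: summing both cross-entropy losses lets
# gradients from the stronger intent task update the shared encoder.
model = JointBiLSTM(vocab_size=10000, embed_dim=100, hidden_dim=128,
                    num_intents=26, num_tags=120)
tokens = torch.randint(1, 10000, (4, 12))
intent_gold = torch.randint(0, 26, (4,))
tag_gold = torch.randint(0, 120, (4, 12))
intent_logits, tag_logits = model(tokens)
loss = (nn.functional.cross_entropy(intent_logits, intent_gold)
        + nn.functional.cross_entropy(tag_logits.reshape(-1, 120),
                                      tag_gold.reshape(-1)))
loss.backward()

Sharing the encoder in this way is one plausible mechanism for the transfer effect reported in the abstract: the NER head benefits from encoder weights shaped by the intent task even when its own supervision signal is weak.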

Keywords

LSTM · Joint model · Bidirectional LSTM · Intent Classification · Named Entity Recognition · Natural Language Understanding


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Akson Sam Varghese (1) (corresponding author)
  • Saleha Sarang (1)
  • Vipul Yadav (1)
  • Bharat Karotra (1)
  • Niketa Gandhi (2)

  1. Technology and Research Group, Depasser Infotech, Mumbai, India
  2. Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence, Auburn, USA
