
Incorporating Complete Syntactical Knowledge for Spoken Language Understanding

  • Conference paper
  • First Online:
Knowledge Graph and Semantic Computing: Knowledge Graph Empowers New Infrastructure Construction (CCKS 2021)

Abstract

Spoken Language Understanding (SLU) is important in task-oriented dialog systems. Intent detection and slot filling are two significant tasks of SLU. State-of-the-art methods solve these two tasks jointly in an end-to-end fashion using pre-trained language models such as BERT. However, existing methods ignore syntactic knowledge and long-range word dependencies, which are essential supplements to semantic models. In this paper, we use Graph Convolutional Networks (GCNs) over dependency trees to incorporate syntactic knowledge. Meanwhile, we propose a novel gate mechanism to model the labels of the dependency arcs, so that both the labels and the geometric structure of the dependency tree are encoded. The proposed method adaptively assigns a weight to each dependency arc based on its dependency type and word context, which avoids encoding redundant features. Extensive experimental results show that our model outperforms strong baselines.
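The label-gated graph convolution described above can be sketched as follows. This is a minimal illustration of the general idea, not the paper's exact formulation: the function name, the scalar sigmoid gate over the head representation concatenated with an arc-label embedding, and the toy utterance are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def label_gated_gcn_layer(H, arcs, W, W_gate, label_emb):
    """One graph-convolution step over a dependency tree in which each
    arc's message is scaled by a scalar gate computed from the head
    word's representation and the embedding of the arc's label.

    H:         (n, d) word representations
    arcs:      list of (head, dependent, label) tuples
    W:         (d, d) shared message-passing weights
    W_gate:    (2d,) gate weights
    label_emb: dict mapping label -> (d,) embedding
    """
    out = H @ W  # self-loop term
    for head, dep, label in arcs:
        # gate in (0, 1): how much this arc type matters in this context
        g = sigmoid(np.concatenate([H[head], label_emb[label]]) @ W_gate)
        out[dep] += g * (H[head] @ W)   # head -> dependent message
        out[head] += g * (H[dep] @ W)   # dependent -> head (tree treated as undirected)
    return np.maximum(out, 0.0)         # ReLU

# toy utterance "book a flight", arcs as (head, dependent, label)
rng = np.random.default_rng(0)
d = 4
H = rng.standard_normal((3, d))
W = rng.standard_normal((d, d)) * 0.1
W_gate = rng.standard_normal(2 * d) * 0.1
label_emb = {"det": rng.standard_normal(d), "obj": rng.standard_normal(d)}
arcs = [(2, 1, "det"), (0, 2, "obj")]   # "a" <-det- "flight", "book" -obj-> "flight"
H_out = label_gated_gcn_layer(H, arcs, W, W_gate, label_emb)
```

Because the gate depends on both the label embedding and the head word's context, an arc such as `obj` can be weighted up in one sentence and down in another, which is what lets the model suppress redundant syntactic features.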


Notes

  1. https://spacy.io/.


Author information


Correspondence to Shimin Tao.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Tao, S. et al. (2021). Incorporating Complete Syntactical Knowledge for Spoken Language Understanding. In: Qin, B., Jin, Z., Wang, H., Pan, J., Liu, Y., An, B. (eds) Knowledge Graph and Semantic Computing: Knowledge Graph Empowers New Infrastructure Construction. CCKS 2021. Communications in Computer and Information Science, vol 1466. Springer, Singapore. https://doi.org/10.1007/978-981-16-6471-7_11


  • DOI: https://doi.org/10.1007/978-981-16-6471-7_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-6470-0

  • Online ISBN: 978-981-16-6471-7

  • eBook Packages: Computer Science (R0)
