Abstract
Spoken Language Understanding (SLU) is a core component of task-oriented dialog systems, and intent detection and slot filling are its two central tasks. State-of-the-art methods solve the two tasks jointly in an end-to-end fashion with pre-trained language models such as BERT. However, existing methods ignore syntactic knowledge and long-range word dependencies, which are essential complements to semantic models. In this paper, we use Graph Convolutional Networks (GCNs) over the dependency tree to incorporate syntactic knowledge. In addition, we propose a novel gate mechanism that models the labels of the dependency arcs, so that both the labels and the geometric structure of the dependency tree are encoded. The proposed method adaptively assigns a weight to each dependency arc based on its dependency type and the word context, which avoids encoding redundant features. Extensive experimental results show that our model outperforms strong baselines.
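The paper's exact architecture is not reproduced on this page, but the abstract's key idea can be illustrated: a graph convolution over the dependency tree in which each arc's message is scaled by a learned gate computed from the arc's dependency label and the head word's context. The following is a minimal numpy sketch under that reading; all names (`LabelGatedGCNLayer`, `w_gate`, the dimensions) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LabelGatedGCNLayer:
    """One graph-convolution layer over a dependency tree.

    Each arc (head -> dependent, label) carries a label embedding; a
    learned gate maps the head word's hidden state plus the label
    embedding to a scalar weight in (0, 1) for that arc, so the layer
    can down-weight uninformative dependency types per context.
    """

    def __init__(self, hidden_dim, label_dim, n_labels):
        self.W = rng.normal(0, 0.1, (hidden_dim, hidden_dim))        # feature transform
        self.w_gate = rng.normal(0, 0.1, (hidden_dim + label_dim,))  # gate parameters
        self.label_emb = rng.normal(0, 0.1, (n_labels, label_dim))   # arc-label embeddings

    def forward(self, h, arcs):
        """h: (n_words, hidden_dim) word states; arcs: list of (head, dep, label_id)."""
        out = np.zeros_like(h)
        for head, dep, label in arcs:
            gate_in = np.concatenate([h[head], self.label_emb[label]])
            g = sigmoid(gate_in @ self.w_gate)   # adaptive scalar weight for this arc
            out[dep] += g * (h[head] @ self.W)   # gated message along the dependency arc
        return np.maximum(out + h @ self.W, 0.0)  # self-loop term + ReLU

# Toy usage: 3 words, arcs "word1 -> word0" and "word1 -> word2".
layer = LabelGatedGCNLayer(hidden_dim=4, label_dim=2, n_labels=3)
h = rng.normal(0, 1, (3, 4))
out = layer.forward(h, arcs=[(1, 0, 0), (1, 2, 2)])
```

Because the gate output lies in (0, 1), each arc contributes a soft, context-dependent fraction of its message rather than a fixed weight, which is one plausible way to encode both the geometry and the labels of the dependency tree as the abstract claims.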
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
Cite this paper
Tao, S. et al. (2021). Incorporating Complete Syntactical Knowledge for Spoken Language Understanding. In: Qin, B., Jin, Z., Wang, H., Pan, J., Liu, Y., An, B. (eds) Knowledge Graph and Semantic Computing: Knowledge Graph Empowers New Infrastructure Construction. CCKS 2021. Communications in Computer and Information Science, vol 1466. Springer, Singapore. https://doi.org/10.1007/978-981-16-6471-7_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-6470-0
Online ISBN: 978-981-16-6471-7