Learning to Detect Verbose Expressions in Spoken Texts

  • Qingbin Liu
  • Shizhu He
  • Kang Liu
  • Shengping Liu
  • Jun Zhao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11221)

Abstract

The analysis and understanding of spoken texts is an important task in artificial intelligence and natural language processing. However, spoken texts contain many verbose expressions (such as pet phrases, filler words, and modal particles), which pose great challenges for subsequent tasks. This paper is devoted to detecting verbose expressions in spoken texts. Considering the correlation of verbose words/characters in spoken texts, we adapt sequence models to detect them in an end-to-end manner. Moreover, we propose a model with long short-term memory (LSTM) and a modified restricted attention (MRA) mechanism, which is able to exploit the mutual influence between long-distance and local words in sentences. In addition, we propose a compare mechanism to model repetitive verbose expressions. Experimental results show that, compared with rule-based and direct classification methods, our proposed model increases the F1 measure by 54.08% and 18.91%, respectively.
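The abstract outlines the architecture only at a high level (a sequence tagger built on an LSTM, an attention restricted to nearby words, and a compare step for repetitions); the exact formulation lives in the paper body. The sketch below is therefore a minimal illustration under assumptions, not the authors' implementation: the window size, the additive attention scoring, the [h; c; |h − c|; h ⊙ c] comparison features, and the class name VerboseTagger are all hypothetical choices made for the example.

```python
# Minimal sketch: BiLSTM tagger with a windowed ("restricted") attention and a
# compare step, loosely following the abstract. All design details below are
# assumptions for illustration, not the paper's actual MRA/compare mechanisms.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VerboseTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, window=3, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.window = window                      # attention restricted to +/- window tokens
        d = 2 * hidden_dim
        self.att_score = nn.Linear(2 * d, 1)      # additive score over (query, key) pairs
        # Compare step (assumed form): concatenating h, its local context c,
        # |h - c|, and h * c makes repeated, near-identical neighbours salient.
        self.classify = nn.Linear(4 * d, num_tags)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))      # (batch, seq_len, 2*hidden)
        batch, seq_len, d = h.shape
        contexts = []
        for i in range(seq_len):
            lo, hi = max(0, i - self.window), min(seq_len, i + self.window + 1)
            keys = h[:, lo:hi]                    # local neighbourhood only
            query = h[:, i:i + 1].expand(-1, keys.size(1), -1)
            scores = self.att_score(torch.cat([query, keys], dim=-1)).squeeze(-1)
            alpha = F.softmax(scores, dim=-1)     # weights over the local window
            contexts.append(torch.bmm(alpha.unsqueeze(1), keys).squeeze(1))
        c = torch.stack(contexts, dim=1)          # (batch, seq_len, 2*hidden)
        # Compare token states against their attended local context.
        feats = torch.cat([h, c, torch.abs(h - c), h * c], dim=-1)
        return self.classify(feats)               # per-token logits: keep vs. verbose


# Toy usage: tag a batch of two 6-token "sentences".
model = VerboseTagger(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 6)))
print(logits.shape)                               # torch.Size([2, 6, 2])
```

Restricting attention to a small window reflects the intuition stated above: verbosity cues such as repeated characters or filler words are mostly local phenomena, while the BiLSTM still carries the long-distance sentence context.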

Keywords

Spoken texts · Verbose expressions · Text transformation · Modified restricted attention mechanism · Compare mechanism

Acknowledgments

The research work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002101, the Natural Science Foundation of China (Nos. 61533018 and 61702512), and the independent research project of the National Laboratory of Pattern Recognition.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Qingbin Liu (1, 2)
  • Shizhu He (1)
  • Kang Liu (1)
  • Shengping Liu (3)
  • Jun Zhao (1, 2)

  1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
  3. Beijing Unisound Information Technology, Beijing, China
