Abstract
As people use language differently when speaking than when writing, transcriptions generated by automatic speech recognition systems can be difficult to read. While techniques exist to simplify spoken language into written language at the sentence level, research on simplifying spoken documents that exhibit diverse spoken-language issues at the document level is limited. Document-level spoken-to-written simplification faces challenges posed by cross-sentence transformations and the long-range dependencies of spoken documents. This paper proposes a new method called G-DSWS (Graph attention networks for Document-level Spoken-to-Written Simplification), which uses graph attention networks to model the structure of a document explicitly. G-DSWS utilizes structural information from the document to improve the document modeling capability of the encoder-decoder architecture. Experiments on an internal dataset and a publicly available dataset demonstrate the effectiveness of the proposed model, and a human evaluation and case study show that G-DSWS indeed improves the readability of spoken Chinese documents.
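To make the core idea concrete, below is a minimal, hypothetical sketch of a single-head graph attention layer in the style of Veličković et al., applied to sentence-node representations of a document. The class name, dimensions, graph construction (adjacent-sentence edges plus self-loops), and how the output would be fused into the encoder-decoder are illustrative assumptions, not the authors' exact G-DSWS design.

```python
# Sketch: graph attention over sentence nodes so each sentence's
# representation becomes document-structure-aware before decoding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: node features (N, d_in) -> (N, d_out)."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)   # shared node projection
        self.a = nn.Linear(2 * d_out, 1, bias=False)  # attention scorer
        self.leaky_relu = nn.LeakyReLU(0.2)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, d_in) node features; adj: (N, N) 0/1 adjacency matrix.
        Wh = self.W(h)                                # (N, d_out)
        N = Wh.size(0)
        # Score every ordered node pair: e_ij = LeakyReLU(a^T [Wh_i || Wh_j]).
        pairs = torch.cat(
            [Wh.unsqueeze(1).expand(N, N, -1),
             Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)  # (N, N, 2*d_out)
        e = self.leaky_relu(self.a(pairs)).squeeze(-1)   # (N, N)
        # Mask non-edges so attention flows only along document structure.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = F.softmax(e, dim=-1)                     # (N, N)
        return F.elu(alpha @ Wh)                         # aggregated node states


# Toy usage: 4 sentence nodes, edges between adjacent sentences plus
# self-loops, so each sentence attends to its neighbours and itself.
h = torch.randn(4, 16)
adj = torch.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
doc_aware = GraphAttentionLayer(16, 16)(h, adj)          # (4, 16)
print(doc_aware.shape)
```

In such a setup, the structure-aware node states would be combined with the token-level encoder states (e.g., by attention or concatenation) before decoding, which is one plausible way an encoder-decoder could exploit the document graph the abstract describes.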
Acknowledgements
This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0108600 and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDC08020100.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhao, Y., Wu, H., Xu, S., Xu, B. (2024). Make Spoken Document Readable: Leveraging Graph Attention Networks for Chinese Document-Level Spoken-to-Written Simplification. In: Luo, B., Cheng, L., Wu, Z.-G., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1966. Springer, Singapore. https://doi.org/10.1007/978-981-99-8148-9_32
DOI: https://doi.org/10.1007/978-981-99-8148-9_32
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8147-2
Online ISBN: 978-981-99-8148-9
eBook Packages: Computer Science, Computer Science (R0)