Abstract
In this paper, we study the task of generating long and coherent text. In the literature, methods based on Generative Adversarial Nets (GANs) have been among the mainstream approaches to generic text generation. We aim to improve two aspects of GAN-based text generation: long-sequence optimization and semantic coherence. To this end, we propose a novel Multi-Level Generative Adversarial Network (MLGAN) for long and coherent text generation. Our approach explicitly models the generation process at three levels: paragraph-, sentence-, and word-level generation. At the top two levels, we generate continuous paragraph vectors and sentence vectors as semantic sketches that plan the overall content, while at the bottom level we generate discrete word tokens to realize the sentences. Furthermore, we adopt a conditional GAN architecture that injects paragraph vectors into sentence-vector generation to enhance inter-sentence coherence. Extensive experimental results demonstrate the effectiveness of the proposed model.
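The three-level flow described in the abstract (paragraph vector → conditioned sentence vectors → discrete words) can be sketched as follows. This is a minimal toy illustration, not the paper's actual parameterization: all weights, dimensions, and update rules here are hypothetical stand-ins for the trained generators, and the greedy argmax decoder replaces the paper's word-level GAN.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16          # latent/vector dimensionality (toy value)
N_SENT = 3      # sentences per paragraph
SEQ_LEN = 5     # words per sentence
VOCAB = 50      # toy vocabulary size

# Hypothetical weights standing in for the trained generators at each level.
W_para = rng.standard_normal((D, D)) * 0.1
W_sent = rng.standard_normal((2 * D, D)) * 0.1   # takes [prev sentence; paragraph]
W_word = rng.standard_normal((D, VOCAB)) * 0.1

def generate_paragraph_vector(z):
    """Top level: map noise to a continuous paragraph vector."""
    return np.tanh(z @ W_para)

def generate_sentence_vectors(p_vec):
    """Middle level: produce sentence vectors, each conditioned on the
    paragraph vector (the conditional-GAN injection for coherence)."""
    sents = []
    prev = np.zeros(D)
    for _ in range(N_SENT):
        inp = np.concatenate([prev, p_vec])  # inject the paragraph vector
        prev = np.tanh(inp @ W_sent)
        sents.append(prev)
    return np.stack(sents)

def realize_words(s_vec):
    """Bottom level: decode a sentence vector into discrete word ids
    (greedy argmax over toy logits in place of an adversarially trained decoder)."""
    ids, h = [], s_vec
    for _ in range(SEQ_LEN):
        ids.append(int((h @ W_word).argmax()))
        h = np.tanh(h @ W_para)              # toy hidden-state update
    return ids

z = rng.standard_normal(D)
p = generate_paragraph_vector(z)
sentences = generate_sentence_vectors(p)
paragraph = [realize_words(s) for s in sentences]
print(len(paragraph), len(paragraph[0]))     # 3 sentences, 5 word ids each
```

The point of the structure, as in the paper, is that the discrete word-level decisions are deferred to the last stage; planning happens entirely in continuous vector space, where gradients flow without the non-differentiability of sampled tokens.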
Acknowledgement
This work was partially supported by the National Natural Science Foundation of China under Grants No. 61872369 and 61832017, the Beijing Academy of Artificial Intelligence (BAAI) under Grant No. BAAI2020ZJ0301, and the Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
© 2021 Springer Nature Switzerland AG
Tang, T., Li, J., Zhao, W.X., Wen, JR. (2021). Generating Long and Coherent Text with Multi-Level Generative Adversarial Networks. In: U, L.H., Spaniol, M., Sakurai, Y., Chen, J. (eds) Web and Big Data. APWeb-WAIM 2021. Lecture Notes in Computer Science(), vol 12859. Springer, Cham. https://doi.org/10.1007/978-3-030-85899-5_4
Print ISBN: 978-3-030-85898-8
Online ISBN: 978-3-030-85899-5