
Text synthesis from keywords: a comparison of recurrent-neural-network-based architectures and hybrid approaches

  • Nikolaos Kolokas
  • Anastasios Drosou
  • Dimitrios Tzovaras
Emerging Trends of Applied Neural Computation - E_TRAINCO

Abstract

This paper concerns an application of recurrent neural networks to text synthesis at the word level, with the help of keywords. First, a part-of-speech (POS) tagging library is employed to extract verbs and nouns from the texts used in our work; a subset of these, after automatic eliminations, is treated as the aforementioned keywords. Our ultimate aim is to train a recurrent neural network to map the keyword sequence of a text to the entire text. The keyword and full-text word sequences are successively reformulated so that they can serve as the network's input and target as efficiently as possible. The predicted texts are largely understandable, and model performance depends on three factors: the difficulty of the problem, determined by the percentage of full-text words treated as keywords (approximately 1/3 to 1/2); the training memory cost, mainly affected by the network architecture; and the similarity between different texts, which determines the best-performing architecture.
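
The keyword-extraction step and the sequence-mapping network described above can be illustrated with a minimal sketch. This is not the authors' exact pipeline: the choice of NLTK as the POS tagger, the noun/verb tag filter, the vocabulary size, and the assumption that keyword sequences are padded to the full-text length are all illustrative.

# Minimal sketch of POS-based keyword extraction, assuming NLTK's
# Penn Treebank tagset (tags starting with NN are nouns, VB verbs).
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def extract_keywords(text):
    """Return the nouns and verbs of `text`, in order of appearance."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return [word for word, tag in tagged if tag[:2] in ("NN", "VB")]

print(extract_keywords("Recurrent networks map keyword sequences to texts."))
# likely output: ['networks', 'map', 'keyword', 'sequences', 'texts']

A word-level network mapping the (reformulated) keyword sequence to the full text could then look roughly as follows, assuming both sequences have been padded to a common length and encoded as integer word indices; vocab_size is a hypothetical value:

# Hypothetical word-level sequence model: one softmax over the vocabulary
# per output position, trained on (padded keyword sequence, full text) pairs.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, TimeDistributed

vocab_size = 5000  # illustrative vocabulary size

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=128),
    LSTM(256, return_sequences=True),
    TimeDistributed(Dense(vocab_size, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")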

Keywords

Deep machine learning · Sequence modeling · Natural language processing · Text mining

Notes

Acknowledgements

This work has been partially supported by the European Commission through the Scan4Reco project, funded by the European Union's H2020 programme under Grant Agreement No. 665091. The opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the European Commission.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Center for Research and Technology Hellas, Thermi, Thessaloniki, Greece
