Abstract
In the modern age of information and analytics, natural language processing (NLP) is one of the most important technologies available. Making sense of complex structures in language and deriving insights and actions from them is crucial from an artificial-intelligence perspective. In many domains the importance of natural language processing is paramount and ever growing, as digital information in the form of language is ubiquitous. Applications of natural language processing include language translation, sentiment analysis, web search, customer-service automation, text classification, topic detection from text, language modeling, and so forth. Traditional methods of natural language processing relied on the Bag-of-Words model, the vector space model of words, and hand-coded knowledge bases and ontologies. One of the key areas of natural language processing is the syntactic and semantic analysis of language. Syntactic analysis refers to how words are grouped and connected in a sentence; its main tasks are tagging parts of speech, detecting syntactic classes (such as verbs, nouns, and noun phrases), and assembling sentences by constructing syntax trees. Semantic analysis refers to more complex tasks such as finding synonyms and performing word-sense disambiguation.
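The Bag-of-Words model mentioned above can be illustrated with a short sketch: each document is reduced to a vector of word counts over a shared vocabulary, discarding word order entirely. The function and variable names below are illustrative, not taken from the chapter.

```python
from collections import Counter


def bag_of_words(documents):
    """Build a vocabulary and per-document term-count vectors.

    A toy Bag-of-Words sketch: every document becomes a vector whose
    i-th entry is the count of the i-th vocabulary word in that
    document. Word order and grammar are deliberately ignored.
    """
    # Vocabulary: all distinct lowercase tokens, sorted for a stable order.
    vocab = sorted({word for doc in documents for word in doc.lower().split()})
    vectors = []
    for doc in documents:
        counts = Counter(doc.lower().split())
        # One count per vocabulary word; 0 if the word is absent.
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors


docs = ["the cat sat", "the cat sat on the mat"]
vocab, vecs = bag_of_words(docs)
# vocab -> ['cat', 'mat', 'on', 'sat', 'the']
# vecs[1] -> [1, 1, 1, 1, 2]  ("the" appears twice in the second document)
```

Because such vectors capture only word frequencies, two sentences with the same words in different orders map to identical vectors, which is exactly the limitation that motivates the sequence-aware recurrent models this chapter covers.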
Copyright information
© 2017 Santanu Pattanayak
Cite this chapter
Pattanayak, S. (2017). Natural Language Processing Using Recurrent Neural Networks. In: Pro Deep Learning with TensorFlow. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3096-1_4
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3095-4
Online ISBN: 978-1-4842-3096-1
eBook Packages: Professional and Applied Computing, Apress Access Books