Abstract
The impact of computer systems that can understand natural language will be tremendous. To develop this capability, we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules cannot provide the coverage needed to handle the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical approaches, in which probabilities are assigned to the different readings of a word and to the plausibility of grammatical constructions.
Notes
1. Note that most sentences have one and only one correct syntactic analysis, just as they have only one semantic meaning.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this chapter
Petrov, S. (2011). Introduction. In: Coarse-to-Fine Natural Language Processing. Theory and Applications of Natural Language Processing. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22743-1_1
Print ISBN: 978-3-642-22742-4
Online ISBN: 978-3-642-22743-1