Web Corpus Construction

  • Book
  • © 2013


Part of the book series: Synthesis Lectures on Human Language Technologies (SLHLT)


Table of contents (5 chapters)

About this book

The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. This approach has several advantages: (i) working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms); (ii) creating a corpus from web data is virtually free; (iii) the size of corpora compiled from the WWW may exceed the size of language resources offered elsewhere by several orders of magnitude; (iv) the data is locally available to the user and can be linguistically post-processed and queried with the user's preferred tools. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups, including boilerplate removal and removal of duplicated content. Linguistic processing, and the problems it faces due to the various kinds of noise in web corpora, is also covered. Finally, the authors show how web corpora can be evaluated and compared to other corpora (such as traditionally compiled corpora). For additional material please visit the companion website:

Table of Contents: Preface / Acknowledgments / Web Corpora / Data Collection / Post-Processing / Linguistic Processing / Corpus Evaluation and Comparison / Bibliography / Authors' Biographies
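One of the cleanup steps mentioned above, removal of duplicated content, can be illustrated with a minimal sketch. This is not the book's algorithm: real web-corpus pipelines also detect *near*-duplicates (e.g., via shingling), while this hypothetical helper only drops exact repeats by hashing each document's text:

```python
import hashlib

def remove_exact_duplicates(documents):
    """Keep only the first occurrence of each document text.

    Exact-duplicate removal via content hashing; a sketch only --
    production pipelines additionally handle near-duplicates.
    """
    seen = set()
    unique = []
    for doc in documents:
        # Hash the normalized text; identical texts map to the same digest.
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["A web page.", "Another page.", "A web page."]
print(remove_exact_duplicates(docs))  # the repeated page is dropped
```

Hashing keeps memory proportional to the number of distinct documents rather than their total size, which matters at giga-token scale.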

Authors and Affiliations

  • Freie Universität Berlin, Germany

    Roland Schäfer, Felix Bildhauer

About the authors

Roland Schäfer studied Theoretical and Indo-European Linguistics as well as Japanese Linguistics at Marburg and Bochum Universities. He completed his doctorate, Arguments and Adjuncts at the Syntax-Semantics Interface, in 2008 at Göttingen University, supervised by Gert Webelhuth and Regine Eckardt. Since then, he has been working as a research assistant at Freie Universität Berlin, mainly doing corpus-based research on semantic and morpho-syntactic phenomena. In 2011, he started working on the COW ("Corpora from the Web") project with Felix Bildhauer. His teaching experience covers a wide range of topics, including Theoretical and Corpus Linguistics, English and German Linguistics, and Computational Linguistics.
