Another Look at the Data Sparsity Problem

  • Ben Allison
  • David Guthrie
  • Louise Guthrie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4188)


Performance on a statistical language processing task relies upon accurate information being found in a corpus. However, it is known (and this paper will confirm) that many perfectly valid word sequences do not appear in training corpora. The percentage of n-grams in a test document that are also seen in a training corpus is defined as n-gram coverage, and work in the speech processing community [7] has shown a correlation between n-gram coverage and word error rate (WER) on a speech recognition task. Other work (e.g. [1]) has shown that increasing the amount of training data consistently improves performance on a language processing task. This paper extends that work by examining n-gram coverage for far larger corpora, considering a range of document types which vary in their similarity to the training corpora, and experimenting with a broader range of pruning techniques. The paper shows that large portions of language will not be represented within even very large corpora. It confirms that more data is always better, but how much better depends upon a range of factors: the source of the additional data, the source of the test documents, and how the language model is pruned to account for sampling errors and to keep computation tractable.
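To make the coverage metric concrete, the sketch below computes n-gram coverage for a tokenised test document against a tokenised training corpus. This is a minimal illustration in Python, not the authors' implementation; the function names and the per-occurrence (rather than per-type) counting convention are assumptions here.

    def ngrams(tokens, n):
        """Yield successive n-grams (as tuples) from a token list."""
        return zip(*(tokens[i:] for i in range(n)))

    def ngram_coverage(test_tokens, train_tokens, n):
        """Fraction of the test document's n-gram occurrences that
        also appear somewhere in the training corpus."""
        train_grams = set(ngrams(train_tokens, n))
        test_grams = list(ngrams(test_tokens, n))
        if not test_grams:
            return 0.0
        seen = sum(1 for g in test_grams if g in train_grams)
        return seen / len(test_grams)

    # Toy example: trigram coverage of a short "test document"
    # against a short "training corpus".
    train = "the cat sat on the mat and the cat slept".split()
    test = "the cat sat on the rug".split()
    print(f"trigram coverage: {ngram_coverage(test, train, 3):.2f}")

On this toy example, three of the four test trigrams occur in the training text, giving a coverage of 0.75; the same computation applied over corpora of millions of tokens yields coverage figures of the kind the paper studies.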


References

  1. Banko, M., Brill, E.: Mitigating the Paucity of Data Problem. In: Proceedings of the Conference on Human Language Technology (2001)
  2. Chen, S., Goodman, J.: An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Harvard University (1998)
  3. Jelinek, F.: Up from Trigrams! In: Proceedings of Eurospeech 1991 (1991)
  4. Manning, C., Schütze, H.: Foundations of Statistical Natural Language Processing. MIT Press, Cambridge (1999)
  5. Moore, R.: There's No Data Like More Data (But When Will Enough Be Enough?). In: Proceedings of the IEEE International Workshop on Intelligent Signal Processing (2001)
  6. Powell, W.: The Anarchist Cookbook. Ozark Press LLC (1970)
  7. Rosenfeld, R.: Optimizing Lexical and N-gram Coverage Via Judicious Use of Linguistic Data. In: Proceedings of Eurospeech 1995 (1995)
  8. Klimt, B., Yang, Y.: Introducing the Enron Email Corpus. Carnegie Mellon University (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ben Allison (1)
  • David Guthrie (1)
  • Louise Guthrie (1)

  1. Regent Court, University of Sheffield, Sheffield, UK
