A Method of Extracting Related Words Using Standardized Mutual Information

  • Tomohiko Sugimachi
  • Akira Ishino
  • Masayuki Takeda
  • Fumihiro Matsuo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2843)


Techniques for automatically extracting related words are of great importance in many applications such as query expansion and automatic thesaurus construction. In this paper, we propose a method of extracting related words based on statistical information about word co-occurrences in large corpora. Mutual information is one such statistical measure and has been used widely in natural language processing. A drawback, however, is that mutual information depends strongly on word frequencies. To overcome this difficulty, we propose a normalized deviation of mutual information as a new measure. We also reveal a correspondence between word ambiguity and related words, using word relation graphs constructed with this measure.
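The abstract's pipeline (score word pairs by a mutual-information-based measure, then build a word relation graph from high-scoring pairs) can be sketched as follows. This is a minimal illustration using plain pointwise mutual information over sentence-level co-occurrence; the paper's actual normalized deviation measure, corpus, and windowing scheme are not reproduced here, and the `pmi_scores`/`build_graph` names are placeholders of our own.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(sentences):
    """Pointwise mutual information for word pairs co-occurring in the
    same sentence (a common baseline; the paper instead normalizes MI
    to reduce its dependence on word frequency)."""
    word_freq = Counter()
    pair_freq = Counter()
    n = len(sentences)
    for words in sentences:
        uniq = sorted(set(words))
        word_freq.update(uniq)
        pair_freq.update(combinations(uniq, 2))
    scores = {}
    for (w1, w2), c in pair_freq.items():
        p_xy = c / n
        p_x = word_freq[w1] / n
        p_y = word_freq[w2] / n
        scores[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
    return scores

def build_graph(scores, threshold=0.0):
    """Keep only pairs scoring above a threshold as undirected edges,
    yielding a word relation graph like the ones the paper analyzes."""
    graph = {}
    for (w1, w2), s in scores.items():
        if s > threshold:
            graph.setdefault(w1, set()).add(w2)
            graph.setdefault(w2, set()).add(w1)
    return graph
```

In such a graph, a polysemous word tends to connect several otherwise-disjoint clusters of related words, which is the kind of correspondence between ambiguity and relatedness the abstract refers to.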


Keywords: Mutual Information · Word Frequency · Natural Language Processing · Related Word · Query Expansion





Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Tomohiko Sugimachi¹
  • Akira Ishino¹
  • Masayuki Takeda¹
  • Fumihiro Matsuo¹

  1. Department of Informatics, Kyushu University, Fukuoka, Japan
