Vector Space Models for Search and Cluster Mining

  • Mei Kobayashi
  • Masaki Aono


This chapter consists of two parts: a review of search and cluster mining algorithms based on vector space modeling, followed by a description of a prototype search and cluster mining system. In the review, we consider Latent Semantic Indexing (LSI), a method based on the Singular Value Decomposition (SVD) of the document-attribute matrix, and Principal Component Analysis (PCA) of the document vector covariance matrix. In the second part, we present novel techniques for mining major and minor clusters from massive databases, based on enhancements of LSI and PCA, together with automatic labeling of clusters based on their document contents. Most mining systems are designed to find major clusters and often fail to report information on smaller minor clusters. Minor cluster identification is important in many business applications, such as credit card fraud detection, profile analysis, and scientific data analysis. Another novel feature of our method is the recognition and preservation of naturally occurring overlaps among clusters, which is important for multiperspective analysis of databases. Results from implementation studies with a prototype system using over 100,000 news articles demonstrate the effectiveness of our search and clustering engines.
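The LSI approach summarized above can be illustrated with a minimal sketch: compute the SVD of a term-document matrix, keep the k largest singular triplets, and rank documents by cosine similarity to a query projected into the reduced latent space. The toy matrix, term labels, and query below are hypothetical illustrations, not data from the chapter's prototype system.

```python
# Minimal LSI sketch (toy data): truncated SVD of a term-document
# matrix, then cosine-similarity ranking in the latent space.
import numpy as np

# Toy term-document matrix A: rows = terms, columns = documents.
# Term labels are hypothetical, chosen only for readability.
A = np.array([
    [2, 0, 1, 0],   # term "vector"
    [1, 1, 0, 0],   # term "cluster"
    [0, 2, 0, 1],   # term "fraud"
    [0, 0, 1, 2],   # term "matrix"
], dtype=float)

# SVD: A = U S V^T; keep the k largest singular values (rank-k LSI).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Document coordinates in the k-dimensional latent space: rows of S_k V_k^T.
doc_coords = (np.diag(sk) @ Vtk).T           # shape (n_docs, k)

# Project a query (term-frequency vector) into the same latent space.
query = np.array([1, 1, 0, 0], dtype=float)  # query: "vector cluster"
q_coords = query @ Uk                        # shape (k,)

def cosine(a, b):
    """Cosine similarity, guarded against zero-norm vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Rank all documents by similarity to the query, best first.
scores = [cosine(q_coords, d) for d in doc_coords]
ranking = np.argsort(scores)[::-1]
print("document ranking:", ranking.tolist())
```

The PCA variant discussed in the chapter proceeds analogously, but projects document vectors onto the leading eigenvectors of their covariance matrix rather than onto the singular vectors of A itself.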


Keywords: Singular Value Decomposition · Major Cluster · Vector Space Modeling · Latent Semantic Indexing · Document Vector





Copyright information

© Springer Science+Business Media New York 2004
